Hi,
Is there a repository of BDMs we can view? It would be great to see how others have organized their data.
In the Generic Writer config, I want to pass a hardcoded parameter into the JSON body, like:
"archived": "false"
Someone suggested using: "request_data_wrapper": "{\"archived\": \"false\"}",
but this did not work.
Any suggestions?
Here is my config:
{
  "path": "https://api.10000ft.com/api/v1/projects/[[project_id]]",
  "mode": "JSON",
  "method": "PUT",
  "iteration_mode": {
    "iteration_par_columns": [
      "project_id"
    ]
  },
  "user_parameters": {
    "#token": "KBC::ProjectSecureKV::xxxxx"
  },
  "headers": [
    {
      "key": "auth",
      "value": {
        "attr": "#token"
      }
    },
    {
      "key": "Content-Type",
      "value": "application/json"
    }
  ],
  "json_data_config": {
    "chunk_size": 1,
    "delimiter": ".",
    "request_data_wrapper": "{\"archived\": \"false\"}",
    "infer_types_for_unknown": true
  },
  "debug": true
}
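One possible explanation (a guess, not a confirmed fix): as I understand the Generic Writer docs, `request_data_wrapper` is a template that must contain a placeholder for the parsed row data, so a wrapper that omits the placeholder replaces the body entirely instead of adding a field to it. A sketch of what I mean, assuming the placeholder is `{{data}}` as in the documented examples:

```json
"json_data_config": {
  "chunk_size": 1,
  "delimiter": ".",
  "request_data_wrapper": "{\"archived\": \"false\", \"data\": {{data}}}",
  "infer_types_for_unknown": true
}
```

This would send the hardcoded `"archived"` key alongside the row data nested under `"data"`; whether 10000ft's PUT endpoint accepts that shape is a separate question.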
Hi,
I have a company that occasionally receives large CSV files (~90M records) from their clients. When I uploaded one of these CSVs using the FTP extractor (the file was on an FTP server), the upload took 1.5 hours.
What would you recommend to shorten the 1.5-hour process? A different way to store the file? Preprocessors? Something else?
Please note that this process needs to be automated, and I can instruct the company to adopt a new process for accepting large CSV files from their clients.
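For context on what I've considered so far: since CSV text compresses well, one option I could ask the company to adopt is gzipping the file before it lands on the FTP server, so far less data crosses the wire. A minimal sketch, assuming gzip is available on their side (the file name `large_export.csv` is illustrative; the tiny sample file just stands in for the real export):

```shell
# Stand-in for the real 90M-record export
printf 'id,name\n1,alpha\n2,beta\n' > large_export.csv

# Compress for transfer; -k keeps the original, -f overwrites any old .gz
gzip -kf large_export.csv

# large_export.csv.gz is what would be placed on the FTP server
ls -l large_export.csv large_export.csv.gz
```

Whether the extractor can decompress the .gz on ingest, or a processor step is needed first, is part of what I'm asking.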