I haven't done any rigorous benchmarking, but for the roughly 200,000 rows of data I handle daily at work, the following line of Python takes around 10 seconds:
pandas.read_json(json_obj).to_sql("my_table", engine)
Any transformation of the data before writing to the DB takes a split second with vectorized pandas operations.
Pandas is still my first choice when it comes to tasks like this.
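For anyone curious, here's a runnable sketch of that pattern. The table name, sample JSON, and the in-memory SQLite database are all made up for illustration (at work you'd pass a real connection/engine and your actual payload); pandas accepts a plain `sqlite3` connection for `to_sql`/`read_sql`:

```python
import io
import sqlite3

import pandas as pd

# Hypothetical stand-in for the real JSON payload.
json_obj = io.StringIO('[{"id": 1, "value": 10}, {"id": 2, "value": 20}]')

# In-memory SQLite keeps the example self-contained.
conn = sqlite3.connect(":memory:")

df = pd.read_json(json_obj)

# Example of a vectorized transformation before writing.
df["value_doubled"] = df["value"] * 2

# Write the frame to the DB in one call.
df.to_sql("events", conn, if_exists="replace", index=False)

print(pd.read_sql("SELECT * FROM events", conn))
```

With ~200k rows the `to_sql` step dominates; the vectorized column math above is effectively instant by comparison.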