Hacker News

I haven't done any benchmarking, but for the roughly 200,000 rows of data I handle daily at work, the following line of Python takes around 10 seconds:

pandas.read_json(json_obj).to_sql(table_name, con)

Any transformation of the data before writing to the DB takes a split second with vectorized pandas operations.

Pandas is still my first choice when it comes to tasks like this.
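A minimal sketch of the pattern above, assuming a JSON array of records and an SQLite connection as stand-ins for the real payload and database (the table name, columns, and tax transformation are all hypothetical):

```python
import io
import json
import sqlite3

import pandas as pd

# Hypothetical sample payload standing in for the real JSON object.
records = [{"id": i, "price": i * 1.5} for i in range(1000)]
json_obj = json.dumps(records)

# Parse the JSON into a DataFrame (StringIO avoids the deprecation
# warning newer pandas emits for literal JSON strings).
df = pd.read_json(io.StringIO(json_obj))

# A vectorized transformation before writing -- one column-level
# operation, no Python-level loop over rows.
df["price_with_tax"] = df["price"] * 1.07

# Write to an in-memory SQLite DB; table name and connection are
# placeholders for whatever the real pipeline targets.
con = sqlite3.connect(":memory:")
df.to_sql("prices", con, if_exists="replace", index=False)

n = con.execute("SELECT COUNT(*) FROM prices").fetchone()[0]
print(n)
```

With a real database you would pass an SQLAlchemy engine as `con` instead of an sqlite3 connection, but the shape of the one-liner stays the same.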
