
Hours to upload CSV with 50k records...

edited November 2012 in Troubleshooting
When I started with version 1.1.0 I was able to upload a 65k-record CSV file in about 15 minutes, and that was from a micro AWS instance. I've since moved Sendy to one of our medium AWS instances running the latest 1.1.1.4, and last night it took almost 3 hours to fully upload a 50k-record CSV file. I'm not sure if something has changed, or if the delay comes from already having about 200k records in the database and the import checking against all of them.

None of our CSV imports use custom fields, by the way.

FYI, our database is an RDS Large instance that normally sits at about 20% CPU; during the import it spiked to roughly 85-90%. Given how hard the import hits the database CPU, it seems like something needs to be optimized.

Thanks,
Jeremy

Comments

  • The only thing that changed is the check for invalid emails.
  • Ben,

    Hmmm, thanks for the reply. I checked the database tables and noticed there aren't any indexes on them apart from the auto-increment primary keys. That would make the import slow and really hard on the database, since every duplicate-check query has to do a full table scan of the subscribers table, which is close to 200k rows in our case. It would also explain why imports were quick in the beginning, when the table was empty. (See the index sketch after this comment.)

    Did an earlier install happen to include the table indexes and we missed them? Remember, I did my install from 1.1.0, so they could have been added in a previous version's install/update.

    Thanks!
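
A minimal sketch of how one could confirm the full-table-scan behaviour and add the missing index, assuming a MySQL database with a subscribers table that has id and email columns. The table, column, and connection details below are placeholder assumptions, not the confirmed Sendy schema:

    # Sketch: check whether the duplicate lookup does a full table scan, then add
    # an index. Assumes mysql-connector-python and a `subscribers` table with
    # `id` and `email` columns -- the real Sendy schema may differ.
    import mysql.connector

    conn = mysql.connector.connect(
        host="your-rds-endpoint",   # hypothetical connection details
        user="sendy",
        password="secret",
        database="sendy",
    )
    cur = conn.cursor()

    # EXPLAIN shows how MySQL executes the per-row duplicate check.
    # type=ALL with no key means a full scan of ~200k rows per imported address.
    cur.execute("EXPLAIN SELECT id FROM subscribers WHERE email = %s",
                ("test@example.com",))
    for row in cur.fetchall():
        print(row)

    # An index on the lookup column turns each check into an index lookup.
    cur.execute("CREATE INDEX idx_subscribers_email ON subscribers (email)")
    conn.commit()

    cur.close()
    conn.close()

With the index in place, each duplicate check during the import becomes an index lookup instead of a scan over all ~200k existing rows.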
  • Hi Jeremy,

    Creating an index will slow down UPDATEs and INSERTs, especially as adding and updating subscribers happens more frequently than importing a CSV. But we'll still consider it. Thanks! (A quick timing sketch illustrating that trade-off follows below.)

    Ben
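
To illustrate the trade-off Ben describes, here is a small self-contained timing sketch. It uses SQLite from the Python standard library purely so it runs anywhere; Sendy runs on MySQL, so the absolute numbers will differ, but the direction is the same: an extra index makes each INSERT/UPDATE somewhat more expensive while making lookups much cheaper.

    # Self-contained timing sketch of the write-side cost of an extra index.
    # SQLite is used only so the script runs without a database server.
    import sqlite3
    import time

    def time_inserts(with_index, n=50_000):
        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.execute("CREATE TABLE subscribers (id INTEGER PRIMARY KEY, email TEXT)")
        if with_index:
            cur.execute("CREATE INDEX idx_email ON subscribers (email)")
        rows = [(f"user{i}@example.com",) for i in range(n)]
        start = time.perf_counter()
        cur.executemany("INSERT INTO subscribers (email) VALUES (?)", rows)
        conn.commit()
        elapsed = time.perf_counter() - start
        conn.close()
        return elapsed

    print(f"inserts without index: {time_inserts(False):.2f}s")
    print(f"inserts with index:    {time_inserts(True):.2f}s")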
This discussion has been closed.