I think this is a great idea. I've had to solve this on almost every web app I've worked on.
I've just gone through your `example.csv`, and have this feedback:
- the (row) "Remove" action seemed like a data column to me, because it is placed adjacent to the data. I would separate the actions completely from the data. I'd probably avoid the "X" within the column, as again, it looks like data. Admittedly, users probably know the content of the CSV they are uploading, so this might not be too big a concern. But the UX/semantics of styling the "actions" the same as the "data" seem like they will lead to confusion. At the moment, the only action is "Remove", so I might drop the Remove "column" and just put a button labelled "Remove"; after all, the "X" is already a "button" (as a link). That said, bulk actions are a chore to implement, so I suspect getting this working nicely will probably be a big draw for your target market. Whether it's checkboxes with a single select drop-down to apply an "action", or a select drop-down on each row (painful UX, I'd think). Hopefully you already have a revision on your roadmap.
- The wording "File has 1 invalid rows. Please resolve the errors before uploading." appears after the user has already uploaded the CSV, so I think it should be adjusted. At this point in the workflow, I'd probably stop referring to "File" (the user is only concerned with the data at this point). I'd suggest more succinct wording, perhaps: "1 invalid row must be resolved before continuing."; or "...before you may continue.", if you prefer using the second person in the app messages.
- The full-screen modal for a small number of rows forces the user to hunt for the buttons and mouse a far distance. I realize for most data, the modal is likely to be the entire screen (and multi-page), but nevertheless, I would probably make the modal shrink to fit the data. As a consideration for your market: I've written apps which use CSV for only ~25 rows, and I would still consider using this, because the user interaction for sanity-checking and clean-up was still code I would've preferred to skip writing.
- I would increase the number of rows permitted at the lower tiers. Maybe you are analyzing the imports and have a lot of information about the pricing breakpoints and segmentation, but my hunch would be that fewer imports with more rows might entice people. I can think of apps which might have only 5 or 10 imports per month, but need more than 50K rows per import; it's a pretty big jump to Basic, so you might miss those customers.
- I would make the corrected/edited CSV available for download (by the customer) at the end of the import. If anyone needs to re-import, it will surely be annoying to re-correct it during import; or even to remember all the corrections they made during import and go back and correct their original spreadsheet (or other data source).
- Would be nice to see PostgreSQL on your destination roadmap :-)
Again, this seems like a really good idea to me. I wish you success!
I just thought of one more comment: have you tested the UI/UX when there are enough columns that the modal must scroll? In that case, the actions adjacent to the data will also probably not work very well; i.e., I would not make the user horizontally scroll to access the actions. Meaning, I think you'll definitely want the actions separated from the data, so that the data scrolls while the actions remain fixed in position.
Sorry I don't have more time to experiment with actual data types. I'm sure you can do a lot here too. I once dealt with an app that imported a CSV with geographic coords [lat,long]. During import validation, we showed a map to allow for correction / precise placement. That kind of richness would be great. To be validated with your market, of course.
We have tested with 30 columns. The horizontal scrolling works alright, but as you have rightly pointed out, the actions column is right at the end. We will have to work something out.
We plan to add advanced validation capabilities as and when real customers demand use cases. It's a huge project in itself!
Having an extensive background in *nix and network systems through the 90s and early 2000s, image maintenance is the main reason I have avoided docker until recently (just not having time to investigate). One major reason to pay an OS vendor (e.g. RHEL) is to "outsource" security and systems-integration testing, so that ops people can simply "update" (the entire dep-tree of the distribution). I don't want to bring all that effort in-house, especially if I'm still paying for a vendor service contract. That said, I agree security and deployment can be eased by any form of "container" (in numerous OSes) because ops people can black-box the software while managing resource allocation, etc.
How are people handling container maintenance?
For example, I could imagine modifying SRPM spec files to also build a container (possibly even statically linked binaries inside). Then I can vendor-update and rebuild all the containers I need from SRPM; not much more complicated than `rpmbuild postgresql`, then re-deploying the emitted postgres container.
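To sketch what I mean, the container half of that workflow can be as thin as installing the locally rebuilt RPM into a vendor base image. This is an untested illustration: the base image, RPM path, and package name are assumptions, not output from an actual build.

```dockerfile
# Hypothetical: package up an RPM emitted by a local `rpmbuild` run.
# Assumes rpmbuild dropped its output under rpmbuild/RPMS/x86_64/.
FROM registry.access.redhat.com/ubi9/ubi-minimal
COPY rpmbuild/RPMS/x86_64/postgresql-*.rpm /tmp/
RUN microdnf install -y /tmp/postgresql-*.rpm && microdnf clean all
```

The appeal is that the vendor's update stream still drives everything: re-run the vendor update, re-run `rpmbuild`, re-run the image build, re-deploy.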
You can do this today with D-Wave, Rigetti and Xanadu. I've used D-Wave myself; signing up for their LEAP platform is inexpensive (possibly free for a small amount of use, I'm not sure about their current offering).
There are different implementations of "quantum computing" which are appropriate for different types of problems. Google and Wikipedia will teach you about them; for example, quantum annealing vs. universal quantum computing.
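To make the annealing side concrete: a machine like D-Wave's minimizes a QUBO (quadratic unconstrained binary optimization) objective rather than running gate-based circuits. Here's a toy sketch that brute-forces a two-variable QUBO classically, just to show the kind of objective an annealer searches over; the coefficients are made up for illustration and this is not any vendor's SDK.

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy of a binary assignment x under a QUBO given as {(i, j): coeff}."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

def brute_force_qubo(Q, n):
    """Exhaustively find the lowest-energy assignment of n binary variables.

    An annealer samples low-energy states of this same objective physically,
    which is why it scales past what brute force can do.
    """
    best = min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))
    return best, qubo_energy(best, Q)

# Toy objective: -x0 - 2*x1 + 3*x0*x1 (diagonal terms are linear biases,
# off-diagonal terms are couplings between variables).
Q = {(0, 0): -1, (1, 1): -2, (0, 1): 3}
best, energy = brute_force_qubo(Q, 2)
print(best, energy)  # -> (0, 1) -2
```

Framing a real problem as a QUBO (graph coloring, scheduling, portfolio selection, etc.) is the actual work; the hardware only handles the minimization.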
If you want to use different versions of Python itself without installing them as differently named binaries or using shell aliases, etc., this is helpful: you can install Python 2.7, Python 3.x, 3.y, etc. and invoke each as simply `python ...`.