Large amounts of historical data are not very useful in the current stats UI in HG, and they can also impact performance. Therefore, stats data is treated more like "temporary" or disposable data. However, it would be quite useful to keep this data for use in an application better suited to data analysis, such as Excel.
I was personally thinking of building a quick method to export the DB to some format (CSV, JSON, etc.) so it can be analyzed in another application. Maybe something like this could be leveraged to provide export + import functionality. In the end, it all comes down to how much free time the developers have, how difficult the feature is to implement, and how much users actually want it.
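To make the idea concrete, here is a minimal sketch in Python, assuming the stats live in a single SQLite table; the database path, table name, and column layout are all hypothetical. It embeds a schema version field in the output, for reasons that come up under import below.

```python
import json
import sqlite3

SCHEMA_VERSION = 1  # embedded so a future importer knows what it is reading

def export_to_json(db_path, table, out_path):
    """Dump one table of an SQLite database to a versioned JSON file."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(f"SELECT * FROM {table}")
        columns = [col[0] for col in cur.description]
        rows = [dict(zip(columns, row)) for row in cur]
        with open(out_path, "w") as f:
            json.dump({"schema_version": SCHEMA_VERSION, "rows": rows}, f)
    finally:
        conn.close()

# Hypothetical names -- adjust to the actual HG stats database.
export_to_json("homegenie_stats.db", "statistics", "stats_export.json")
```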
For import, here are some possible issues:
* You might need to implement de-duplication in case the user uploads duplicate or overlapping data (see the first sketch after this list).
* Change of DB schema across HG updates: If the schema changes, previously exported data will no longer be compatible. At work, my solution to a similar problem was to embed the data schema version in the export file. During import, I call custom-built "transformations" based on the version found in the file. These transformations convert the imported data to a compatible format (if possible). In my code, any time I change the DB schema, I add a new transform function; the second sketch after this list shows the idea.
* Impact of a large import on the running HG app: Given the demands of such a process and the potential issues of importing huge datasets while the app is running, it might be safer to build this kind of function into the HG service launch/manager application. You would need shell access to the HG server, but at least the service could be halted during the import and started back up afterwards.
* Historical data is less useful in the current stats UI: It might not make much sense to even support import at this time. In the future, if the stats UI is changed to support more analysis and/or going back in time to see previous days' consumption (not just averages), import would be much more valuable.
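On the de-duplication point, a minimal sketch of one common approach: declare a UNIQUE constraint on whatever identifies a reading and let `INSERT OR IGNORE` drop collisions. The (parameter, timestamp) key here is an assumption for illustration, not the actual HG schema.

```python
import sqlite3

def import_rows(conn, rows):
    """Insert rows, silently skipping any that already exist.

    Relies on the UNIQUE(parameter, timestamp) constraint below:
    rows that collide with existing data are dropped by OR IGNORE.
    """
    conn.executemany(
        "INSERT OR IGNORE INTO statistics (parameter, timestamp, value) "
        "VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()

conn = sqlite3.connect("homegenie_stats.db")
# Hypothetical schema -- the UNIQUE index is what makes OR IGNORE work.
conn.execute(
    "CREATE TABLE IF NOT EXISTS statistics ("
    "parameter TEXT, timestamp INTEGER, value REAL, "
    "UNIQUE(parameter, timestamp))"
)
import_rows(conn, [("Meter.Watts", 1700000000, 42.5)])
```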
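And on the schema-version point, a sketch of the transformation chain I described: the importer reads the `schema_version` field from the file and applies one transform per version step until the data matches the current schema. The versions and transforms here are made up for illustration.

```python
CURRENT_SCHEMA_VERSION = 3

# One transform per schema change; each upgrades a row dict by one version.
TRANSFORMS = {
    1: lambda row: {**row, "unit": "W"},                   # v1 -> v2: "unit" column added
    2: lambda row: {**row, "value": float(row["value"])},  # v2 -> v3: "value" became REAL
}

def upgrade_rows(rows, file_version):
    """Apply every transform between the file's version and the current one."""
    for version in range(file_version, CURRENT_SCHEMA_VERSION):
        transform = TRANSFORMS.get(version)
        if transform is None:
            raise ValueError(f"no upgrade path from schema v{version}")
        rows = [transform(row) for row in rows]
    return rows
```

Whenever the DB schema changes, you bump `CURRENT_SCHEMA_VERSION` and register one new transform; old export files then upgrade automatically, step by step.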
For export, here are the issues that must be addressed:
* Memory consumption: I'm not 100% sure how SQLite handles concurrency (DB writes during a long DB read), but I assume it just loads the query result entirely into memory, where it can be read later without keeping a DB cursor open. If you are exporting a 50MB database, this could cause a memory issue, especially on a device like a Raspberry Pi.
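Whatever the driver actually does, reading in fixed-size chunks keeps the exporter's memory use bounded. A sketch, again with hypothetical file and table names:

```python
import csv
import sqlite3

CHUNK_SIZE = 10_000  # rows held in memory at any one time

def export_table_chunked(db_path, table, out_path):
    """Stream a table to CSV in fixed-size chunks to bound memory use."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(f"SELECT * FROM {table}")
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cur.description])  # header row
            while True:
                rows = cur.fetchmany(CHUNK_SIZE)
                if not rows:
                    break
                writer.writerows(rows)
    finally:
        conn.close()

export_table_chunked("homegenie_stats.db", "statistics", "stats_export.csv")
```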