Thank you to everyone for your replies! While I wish I could respond and thank everyone individually, I don’t want to make this thread unwieldy. In the end, the solution I went with was a NoSQL key-value storage service. Any type of relational DBaaS, or a VM instance running my own database, started at around 10 USD/month, and even at that price my needs barely scratched the surface of the quotas.
For those interested in the finer details:
I went with Firestore, which is provided by Google Cloud Platform. I initially looked into AWS’s DynamoDB, but I found its pricing overwhelming to understand. With GCP, things are relatively straightforward in that arena: there is no need to set up a billing account to try things out in the free tier (the API will simply throw an exception if you exceed your quotas), and you can easily put limits in place when you do add billing info. That said, the free tier is pretty generous. I don’t know how competitive the pricing is, but I suspect any difference would be immaterial for my needs anyway.
As for normalizing the data for document/collection storage, this was the biggest appeal for me, since there are clever ways to design your objects for optimal storage. In the case of GCP, you have a 1 MiB per-document limit, and they are very transparent about how many bytes each data type takes up. For example, I was able to pack a ~1.2M-row time-series table into just 805 document writes. Very different from the traditional relational SQL paradigm.
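To illustrate the general idea (this is a minimal sketch of the packing technique, not my actual schema; the function name `pack_rows`, the per-row layout, and the overhead constant are all assumptions I made up for the example), you can greedily group rows into documents so that each one stays under the 1 MiB limit. Firestore documents its storage-size rules, so per-row costs can be estimated up front; here I assume each row is stored as an 8-byte integer timestamp plus an 8-byte double:

```python
# Sketch only: pack time-series rows into Firestore-sized documents.
# Assumptions (not from the original post): each row costs ~16 bytes
# (8-byte int timestamp + 8-byte double, per Firestore's documented
# storage-size rules), and DOC_OVERHEAD is a rough allowance for the
# document name and field names.

MAX_DOC_BYTES = 1_048_576   # Firestore's 1 MiB per-document limit
DOC_OVERHEAD = 64           # rough allowance for doc name + field names
ROW_BYTES = 16              # assumed cost of one (int, double) row

def pack_rows(rows, budget=MAX_DOC_BYTES - DOC_OVERHEAD):
    """Greedily group (timestamp, value) rows into documents that fit
    within the size budget. Returns a list of row lists, one per doc."""
    docs, current, used = [], [], 0
    for row in rows:
        if used + ROW_BYTES > budget and current:
            docs.append(current)       # current doc is full; start a new one
            current, used = [], 0
        current.append(row)
        used += ROW_BYTES
    if current:
        docs.append(current)
    return docs

# At ~16 bytes per row, a single document holds roughly 65,000 rows,
# so a large table collapses into a tiny number of writes. A richer
# per-row payload (more fields, strings) would pack fewer rows per doc.
rows = [(i, float(i)) for i in range(100_000)]
docs = pack_rows(rows)
```

The actual per-document count you get (like my 805 writes) depends entirely on how many bytes your real rows cost, so treat the constants above as placeholders to swap for your own measurements.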
I don’t advocate one over the other, but for my purposes, this was the ideal option.
Let me know if you stumble across this and have any questions!