You're thinking of a normalized (BCNF, if not 3NF), well-architected application storing structured data. Unless the app has 100 million+ users or grew very fast, 250TB would be hard to reach.
Time-series data (like the IoT you mentioned), binary blobs, logs, or other data that shouldn't really be in SQL storage can hit any size, so those cases aren't all that interesting.
Can't speak for OP, but managing data for apps with a few million users, what I've observed is that most SQL stores hit the single-TB range and then start getting broken down into smaller DBs: either because teams have grown and want their own microservice/DB, or because infra wants something easier to handle in a variety of ways, including backup/recovery. For larger DBs it's extremely difficult to hit reasonable RTO/RPO numbers.
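The RTO problem is just arithmetic: restoring a full backup is bounded by how fast you can stream bytes back in. A minimal sketch, assuming a hypothetical 500 MB/s restore throughput (the actual number depends entirely on your storage and network):

```python
# Back-of-envelope restore-time math: why multi-TB DBs blow RTO budgets.
# The 500 MB/s throughput is an assumed figure, not a benchmark.
def restore_hours(db_size_tb: float, restore_mb_per_s: float = 500.0) -> float:
    """Hours to stream a full backup back in at a given restore throughput."""
    size_mb = db_size_tb * 1024 * 1024  # TB -> MB
    return size_mb / restore_mb_per_s / 3600

print(f"  1 TB: {restore_hours(1):.1f} h")    # roughly fits a typical RTO
print(f"250 TB: {restore_hours(250):.1f} h")  # days of downtime
```

Even before you add index rebuilds or WAL replay on top, the raw transfer time alone puts a 250TB monolith days outside most recovery budgets, while a handful of 1TB DBs can be restored in parallel.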
You wouldn't say there's "less to worry about" once you have to run a full backup, or demonstrate recovery from backup within a set recovery time.
"One data store is easier" is a myth; it just offloads complexity from developers onto infra teams, who are now provisioning premium NVMe storage for binary data instead of cold object storage.
Binary data isn't indexed or aggregated in a SQL store, so there's no value in keeping it all in one place, except dev experience at the cost of the infra team's experience.
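The usual alternative is to keep only a reference in SQL and put the bytes in object storage. A minimal sketch with SQLite; the table, column, and `s3://` key names are made up for illustration:

```python
# "Reference, not blob" pattern: the SQL row holds a pointer into object
# storage instead of the binary payload itself, so the DB stays small,
# indexable, and cheap to back up. All names here are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE sensor_readings (
    id          INTEGER PRIMARY KEY,
    device_id   TEXT NOT NULL,
    recorded_at TEXT NOT NULL,
    payload_key TEXT NOT NULL  -- object-store key, not the bytes
)""")

# Instead of INSERTing a multi-MB BLOB, record where the object lives.
db.execute(
    "INSERT INTO sensor_readings (device_id, recorded_at, payload_key)"
    " VALUES (?, ?, ?)",
    ("dev-42", "2024-01-01T00:00:00Z",
     "s3://sensor-archive/dev-42/2024/01/01.bin"),
)

row = db.execute(
    "SELECT payload_key FROM sensor_readings WHERE device_id = ?",
    ("dev-42",),
).fetchone()
print(row[0])  # application fetches the payload from object storage itself
```

The queryable metadata (device, timestamp) stays in SQL where indexes actually help, and the blobs live on storage priced for bulk bytes.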
Is it IoT / remote sensing related?