Ask HN: How do you handle user-generated content in your apps?
59 points by view on Aug 23, 2021 | 87 comments
Hey HN,

My name is Melvin and I am currently working on an MVP for a web service called View, to make it easier for developers to upload, process, and deliver media in their apps.

The idea came to mind while I was working on a photo-sharing app and noticed first-hand how needlessly complex and expensive existing services are.

I wished someone would create an easy-to-use and affordable API/SDK, à la Stripe, but for audio, video, and images.

Is this something that was a pain point for you? I'd love to hear about your experiences building apps with user-generated content.

Cheers



I'm using AWS presigned URLs to let my users upload directly to an S3 bucket, with a Lambda function to generate thumbnails, and I keep track of users' uploads through my app's database. This is as close as I can imagine to an "affordable API for audio, video, and images".


Thanks for the feedback! How long did it take you to implement this and are you also encoding videos for adaptive streaming?


I had never worked with such an API before, so it took me roughly two days. The AWS documentation is abysmal; I could've done it in just a few hours if only they cared about documenting their APIs better. My biggest mistake was going for a kind of presigned URL that didn't give me much control over the size of files sent (I can't find it anymore for some reason; it looks like the documentation changed). Go with PresignedPost: it allows quite a lot of control, and it's an amazing way to avoid load on your servers.
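For anyone curious, a minimal sketch of the PresignedPost approach with boto3 (the bucket name, key, and size cap here are placeholders, not anything from the parent's setup):

```python
def upload_conditions(max_bytes: int) -> list:
    # S3 enforces this size range server-side when the browser POSTs the form,
    # so oversized uploads are rejected before they ever touch your servers.
    return [["content-length-range", 1, max_bytes]]

def make_upload_form(bucket: str, key: str, max_bytes: int = 10 * 1024 * 1024) -> dict:
    import boto3  # imported lazily so the policy helper above works standalone
    s3 = boto3.client("s3")
    # Returns a dict with the POST URL and the signed form fields the
    # browser must include when uploading directly to S3.
    return s3.generate_presigned_post(
        Bucket=bucket,
        Key=key,
        Conditions=upload_conditions(max_bytes),
        ExpiresIn=300,  # the signed form expires after 5 minutes
    )
```

The `content-length-range` condition is the control the parent is describing: the cap lives in the signed policy, so clients can't bypass it.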

>are you also encoding videos for adaptive streaming?

Can't help you with that, I only worked with images, best of luck!


I have played around with AWS Elastic Transcoder: https://aws.amazon.com/elastictranscoder/

If video is of interest, I'd definitely take a look at that and see if it meets your needs. It's one of those specialized serverless offerings from AWS that you pay for as you use it.


I would love a "Backblaze of user-generated content". The existing players in the market are way too expensive (likely because they run on the cloud cartel and pass along their bandwidth tax). Basic image handling isn't too hard to deal with on your own, but video and audio are a huge pain. The uploading (which needs to be fault-tolerant and resumable), encoding (which takes a long time), storage (which is large) and playback (which requires a half-dozen different formats) are all very annoying to deal with. So much so that for my SaaS products, I only allow my users to upload images!

I had the exact same idea as you, and shelled out a few hundred dollars for a domain from a squatter. My prototype basically reinvents fault tolerant resumable uploads (like tus.io). On the backend, it streams the file to Wasabi and Backblaze. That's as far as I got. Video/audio scares me, but I'll get to it eventually.

I really like the content moderation as a service (via AI, or humans) idea that others have mentioned.


    an MVP for a web service called View
Nitpick: if someone types in "user-generated content view" to their favorite search engine, they're not likely to find you.


One has to admit the modern trope of naming everything single generic words is growing increasingly tiresome. The only thing keeping every developer conversation from sounding like complete lunacy is careful capitalization.


Yeah, I guess you're right. Do you have experience with SEO? What would be the best way to improve search ranking?

I suppose running a blog with good content might be helpful so that someone might find the service using a search term like "video streaming api" or similar.


How the hell was “view” available as an HN username til 47 days ago!?

On topic, you could try attaching a domain specific word to the end until you’re big enough to take over the generic one, ala Flock Freight, Cured Health, Glow Credit etc. Lot of those lately.


Appreciate the suggestion, thanks! I got the domains view.dev and view.page, what do you think of those?

View.com was taken. They're selling windows, I mean actual glass not the OS.


There's no way you'll ever come in the top 10 results, even if a user is specifically searching for your product. "View" is too generic. I search for Rust docs a lot, and I still see plenty of results about generic rust on iron or rust-cleaning products.


I was thinking about this, and one thing I came up with was "Stowage." An airplane has stowage so its "users" can store their "content" and then retrieve it. The first page of results is all dictionary entries, so it seems pretty unique.


Agreed with the others that those are still too hard to find. In fact, the opposite of what I recommended would be better: since "view" is the domain word here, maybe use a distinctive, kind of unrelated ending word instead?

If you’re set on the tld being part of the name, from a quick skim of available Google domains, something like “view.haus” would work a lot better, but there’s a viewhaus in GDL Mexico already taking up eg @viewhaus and viewhaus.com.

There are a lot of tlds though!


An alternate spelling could help with this; how about 'Vue'? \s


Not 21st century enough, it needs a deleted E followed by an R. I propose Viewr.


1-800-eViewrTronic.comCoinApp


Let's go retro, with an "i-prefix". So "iVue".


Oh you mean the React ”killer” Vue?


Numerous namespace conflicts.


Vuew.


Veew? :-D


Vuwu?


FYI - the hardest part of user generated media is probably the moderation aspect.


That's why we built our Attribution Engine [0]. We help platforms deal with CSAM, IBSA, other toxic content, and copyright.

The way it works is that the platform uses our SDK, through which they "send" all uploaded content. The SDK generates a fingerprint that is sent to our service, and a license is issued. The license is either permissible or non-permissible, and it instructs the platform and the uploader (creator) what to do. We also provide payment distribution, usage reporting, and ADR (alternative dispute resolution).

If a platform uses our service, we indemnify them from any liabilities under the DMCA, EUCD, and others, up to $50M in damages (legal costs and/or court orders).

We charge a percentage of the revenue generated by the platform for our services. If the platform generates no revenue, the service is free. We don't charge per lookup, nor is there any scale limit (well, there are throughput limits, but they are quite far off for most platforms; currently we can search around 1.1k hours of content every second).

We cover video, sound recordings, and compositions. Images are coming late this year and text sometime next year.

Just a quick note on moderation. The benefit of our structure is that moderation is "outsourced" back to law enforcement and government agencies (like NCMEC and the FBI in the US, the Bundespolizei in Germany, etc.). This means platforms don't have to hire people to moderate the covered content, because the liability is transferred back to the organizations that exist to handle it in the first place.

[0] https://pex.com


That sounds interesting, but complicated.

We built our adminless Internet forum [1] for the same reason. It's splendid on the dev side: initially we built traditional moderation tools (they still exist in the Git history), but then we deleted them all, and everything felt much simpler.

Still waiting to see if it will work from a moderation standpoint. The site's been live for almost a year with no issue yet.

Edit: It's a text-only forum, which makes this a lot easier. I was thinking about allowing images, but with a high per-image fee.

[1] https://www.peachesnstink.com


It's complicated because there are legal requirements across many different jurisdictions. The product was built explicitly to solve the liability issues platforms are facing.

I like your solution. As long as it works for you, that's great.


Thanks! What do you think is the most common moderation problem? NSFW media?


First the pornography comes, then the child pornography. That can even be minors sexting, which is not a criminal network, but it can still get you in trouble.

Politics has a way of turning into violent threats, pictures of nooses, etc.

Then there is the spam, actually that comes before the pornography.


* Urge to acquire wealth (spam, scams)

* Urge to fuck (porn)

* Urge to expand tribe (politics)

That just about covers the roots of all human (and all animals in fact) evils, doesn’t it?


I think those three points can further be distilled down to: greed


It's really just moderation in general. It's a long term, never ending issue. I think the big platforms employ thousands of moderators.


There's a reason there's been so much research into AI moderation, although so far it always seems either to be useless or to throw the baby out with the bathwater. Not to mention that without human systems to review what the AI decides (or even with them), it tends to appear to users as automatic censorship (and they're usually not wrong about that).


The difficulty of it is the imbalance.

Once I had 10,000 NSFW images in a collection of 1,000,000. You might say that 99% of the images are good and for most classification problems 99% accuracy would be wonderful.

But it's not good enough.

You might find some simple tricks that eliminate 90% of those NSFW images and now you have 99.9% accuracy. That makes you look like a genius in the machine learning world.

But it's not good enough. Even 99.99% accuracy (100 bad images) is not good enough.

Even if somebody found one NSFW image, you could get kicked out of AdSense. One child porn image uploaded to your site won't put you in jail, but it is still completely unacceptable.
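The arithmetic behind those numbers, as a quick sketch:

```python
# Quick arithmetic behind the point above: with a 1% base rate of bad
# content, headline "accuracy" hides how many bad images still get through.
total = 1_000_000
nsfw = 10_000  # 1% of the collection

def missed_at(accuracy: float) -> int:
    # Treat (1 - accuracy) of the whole collection as misclassified, and
    # assume the worst case: every miss is an NSFW image slipping through.
    return int(round(total * (1 - accuracy)))

print(missed_at(0.99))    # 10000 errors: every single NSFW image could slip by
print(missed_at(0.999))   # 1000
print(missed_at(0.9999))  # 100: still "not good enough"
```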


We had pretty good success on https://paint.wtf (AI-judged Pictionary game) using CLIP for content moderation. Feel free to probe it a bit by drawing a dick or something on one of the prompts; it seems to do a pretty good job (if it doesn’t work please hit the “Report” button and let us know).

Wrote about it here: https://blog.roboflow.com/zero-shot-content-moderation-opena...


I think a large part of the issue could be solved through taking a more "common sense" approach to product design. Imagine you offer a platform like Facebook and I registered a few hours ago. Is it really smart to allow me to start uploading dozens of images or livestream to potentially thousands of existing users? Surely a better approach would be for my account to have restrictions on what I can do and the number of times/period for which I can do it. Once I become trusted in the eyes of the platform those restrictions can start to be loosened.
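A toy sketch of that trust-tier idea (the tiers, thresholds, and limits here are all invented for illustration):

```python
# Upload allowances that loosen as an account accrues trust. In a real
# system "trust" would factor in more than account age (reports, history).
TIERS = {
    # account_age_days threshold -> (uploads per day, can_livestream)
    0:  (3,   False),   # brand-new accounts: a trickle, no streaming
    7:  (20,  False),
    30: (100, True),    # a month in: treated as trusted
}

def allowance(account_age_days: int):
    # Pick the limits for the highest threshold the account has passed.
    best = (0, (0, False))
    for threshold, limits in TIERS.items():
        if account_age_days >= threshold and threshold >= best[0]:
            best = (threshold, limits)
    return best[1]
```

The point is that a few-hours-old account never gets the blast radius of a trusted one, which shrinks the moderation surface considerably.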


As of 10 May 2021 at Facebook: over 15,000 moderators, some suing for mental health trauma:

https://www.nbcnews.com/business/business-news/facebook-cont...


We've been studying this problem for a product we're working on, and it seems like the depths of this hole get dark and very deep. We expect to need to hire someone to moderate early on.

As mentioned, there is child porn, various other forms of illegal porn, illegal violence, hatred, propaganda to incite violence, etc... Then of course any kind of content you might not approve of due to any personal or company policies, if relevant.

All of our material will be forced to be public so the user should have no expectation of privacy, but for many people this won't matter. Just look at Facebook; that's a public-facing service, and people upload atrocities to it constantly.


The current discussion seems to fall under "Trust and Safety". I'm not aware of any specific guidance or organised thought, though I'd be really surprised if there weren't academic coursework or professional training beginning to appear.

You'd do well to look at established services. Craigslist's list of prohibited content, the T&C of Facebook, Twitter, Reddit, etc., are going to be useful.

Just off the top of my head:

Cyberstalking, pornography, child porn, piracy, malware, fraud, illicit goods (guns, drugs, black/grey market, stolen goods), intimidation, gangs, bullying, alcohol, tobacco, prescription medications, hoaxes, various ineffective / "alternative" products and remedies (which themselves run the gamut of legality; even defining this is at best difficult), advertising, advertising for protected or regulated sectors/goods (housing, employment, personal and professional services, beauty care, escorts, security services, licensed professionals... as with the goods section, this rapidly gets complex), legal services/aid, political activities, fomenting revolution / freedom fighters.

User-generated content is a massive concern.

One concept I'm seeing get increased traction is a focus not on the quantity of posted content but on its prevalence, or level of access, or views. Facebook and YouTube especially are increasingly discussing problematic content not in terms of posts or videos, but in terms of views or presentations of those.

This ... starts making trade-offs in moderation much more viable, principally because there is an inverse relationship between the number of items and the views per item: if the top n items get m views each, then the next 10·n items get roughly m/10 views each. Very roughly.

This means that you can set a goal in terms of the number of items viewed (and see what the maximum unmoderated prevalence will be), or target a specific prevalence and determine how many reviewers will be required.

For human moderators, the number of items reviewed per day seems to be in the 500--800 range. Note that 800 items/day in 8 hours is 100/hour, or 1.6 per minute, or 36 seconds per item. That's inclusive of breaks, overhead, and non-moderation tasks.
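Those throughput figures make staffing a back-of-envelope calculation; a rough sketch (using the 800 items/day upper end of the range above):

```python
# Back-of-envelope staffing estimate: a human moderator handles
# roughly 500-800 items per day, breaks and overhead included.
def moderators_needed(items_per_day: int, per_moderator: int = 800) -> int:
    # Round up: a fractional moderator still means one more hire.
    return -(-items_per_day // per_moderator)

print(moderators_needed(1_000_000))  # 1250 moderators for a million items/day
```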

Moderation itself is a very psychologically loaded task. You'll either want to rotate people through it from other functions, or see a heck of a lot of staff turnover.

If anyone has greater insights from one of the current large UGC services (FB, Twitter, Instagram, Whatsapp, TikTok, Imgur, Reddit, etc.), I'd really like to know what current internal practices are.

Some of my previous work had some incidental exposure to this area (I was tasked with removing identified content, working on both our internal and external CDN provider to do so). After a couple of spot checks to see I was unlikely to be deleting content which wouldn't meet removal criteria, as in literally two, I decided I simply didn't want to take the risks of performing additional checks. My removal process turned out to be quite effective --- what the CDN provider's specs suggested might be a weeks-long process removed some millions of items over a weekend. That was on what is by current standards a very modest-sized social network.

I've written on this previously citing YouTube and Facebook sources here: https://joindiaspora.com/posts/f3617c90793101396840002590d8e...


I think you just described https://cloudinary.com/


We use Cloudinary at work, and I would recommend it to anyone; I think it's an awesome service and the API is really easy to use.

They offer a pretty generous free tier for personal stuff as well, although I wish they had some plan between the free tier and the cheapest paid one at $99/month, which is quite a steep increase from paying nothing.


This came to mind for me as well, as it's something I landed on in the past for managing UGC. The primary benefits being a decent api and administrative interface for moderation. It's certainly not perfect, but there's not a lot of competition in the space that I could find. I think most devs lean towards DIY.


Or https://www.simplefileupload.com/ (good if you're on Heroku) or Uploadcare or any of the many other services that help do this kind of thing.


Imagekit.io is good too


And filestack.com


How do you handle user-generated content in your apps?

Very similar to how I'd handle toxic waste. I'd touch it as little as possible, and ideally I'd like it to be someone else's problem.


The app store has requirements for dealing with user-generated content. The biggest pain point for me isn't enabling users to upload content but the moderation around it. One user might want to block another user and filter out any content they produce. Or we may need to manually review and delete content that a user has reported. That's the biggest pain point.


The moderation of UGC is something that's killed off a number of my own ideas, or at best 'parked' them. CSAM scanning only goes so far, and frankly it's just too reactive; proactive defence is too costly for a self-funded early-stage startup like mine.

Building the upload/delivery stream was easy, it's all the 'needless complexity' that adds value. You know, like privacy controls, access control, moderation, image formats and optimised delivery, indexing, search, tagging, etc.

I'll probably reimagine it and find better ways of hosting content but with my vision and UX. Maybe Cloudinary as folk have suggested, maybe some other SaaS DAM product with a solid track record - I doubt I'd trust an early stage MVP if my customers need to rely upon it. Too risky for my tastes.

Now, if there were a moderation-as-a-service (MaaS?) at the right price point, that would have uses.


Exactly why we built our Attribution Engine [0]. We focus not only on the technology, but the liabilities platforms gain from their operations.

[0] https://news.ycombinator.com/item?id=28279105


A pain point that I'm considering is building a video hosting service, but at a 0.3x-0.1x price point of existing ones. Hosting, encoding, and streaming video is expensive. The "pennies" that another user mentions add up really fast.


I've worked in that space; the main problem is that your natural audience (small-time producers) tends to have small budgets and short-term needs. So you spend time finding them, but they don't pay much and not for long.


This is also what prompted YouTube to derive revenue from advertising rather than charging content creators who want to upload video. In the case of YouTube, power laws have inverted the relationship with content creators: it's in YT's interest to not charge at all. A free-to-use/consume platform has turned YT into a commons which captures large audiences, which drive advertising revenue.

Vimeo went into the opposite direction. They have a tiered pricing model charging for the use of the features and tools they offer. However, they pivoted away from catering to a large and diverse audience. Instead they focus on B2B communication. Vimeo is excellent for professional videographers or marketing agencies publishing video content.

The takeaway here is that the expense of processing and publishing video isn't going to drop dramatically in the short run. As hardware became ever more powerful over the past two decades, the demand for high-quality video has paced along in lockstep: 1080p, 4K,... So the costs associated with hosting high-quality content haven't dropped significantly in that regard. There's not much you can change about that.

What you do control is the business model you develop to cover the costs of hosting content. And that means finding a profitable market and asserting a good product/service fit up front, if you can.

Instead of building an app "to make it easier for developers to upload, process, and deliver media in their apps", OP ought to think beyond developers, and rather direct themselves to those who are either producing or consuming content.

Another way of thinking about this might be: Which problem really is getting solved? And whose problem really is it? Is it hosting audiovisual bitstreams? Is it processing? Is it just providing an at-the-ready API which allows users to just easily upload and embed audiovisual material everywhere without even requiring any technical knowledge?

The latter, by the way, already exists in several forms on the Web - e.g. https://oembed.com/ - which is already implemented in e.g. WordPress and supported by large social media platforms such as Instagram and YouTube.


Maybe we could join forces? I'd be happy to chat with you or anyone interested. My email is admin at view.dev


You could sell moderation as an additional service, either AI-based (cheaper) or with actual people. Otherwise, developers probably should be the ones responsible for moderating media uploaded through their apps.


> Otherwise, developers probably should be the ones responsible...

How do you explain to Google that somebody else is responsible when they decide to nuke your app?


I think it's important to recognize here that human moderation in many cases means exposing people to potentially traumatizing imagery. Gore, child porn, you name it. I think providing this service ethically could be very challenging.


My "pain point" is a bit earlier in the process:

User authentication.

What is everyone using for this? How do you turn a static website where a user can set some configurations (say the color scheme) into a site where the user can log in and save their settings?

In the past I rolled my own solutions. But for new projects I am considering using a library or framework.

I guess Django, Flask, Laravel, Symfony, and Express all come with some default auth mechanism. What's HN's experience? Are you using these? Are you happy with them?


It was funny: I built an open-source "user management" framework in 2001, and nobody cared except for crackers.


We use Django's authentication, authorisation, and session management and it's rock solid. I can highly recommend it.


Did you have to create all the usual routes and views for the typical auth flow like signup, login, logout, change-password etc yourself?

I did a quick test like this:

    apt install -y python3-django
    django-admin startproject mysite
    cd mysite
    python3 manage.py migrate
    python3 manage.py runserver 0.0.0.0:80
And it serves a Django site but there seem to be no routes for users to sign up, log in etc.

The urls.py file contains only this one route it seems:

    urlpatterns = [
        url(r'^admin/', admin.site.urls),
    ]


Django provides views for all of these things that you can inherit from to customise if you want.

We happen to deviate from Django's norms enough that we've written our own, but there are a lot of benefits to basing your own on Django's because they have already solved some more subtle security issues like rotating session keys at the right stages.
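For reference, the stock views can be wired up with a single include; a minimal urls.py sketch (Django's `django.contrib.auth.urls` provides login, logout, and password change/reset routes, but no signup view, and you still supply templates such as `registration/login.html` yourself):

```python
# urls.py -- wiring up Django's built-in auth views alongside the admin.
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path("admin/", admin.site.urls),
    # Adds /accounts/login/, /accounts/logout/, /accounts/password_change/,
    # /accounts/password_reset/, etc. Signup is the one flow you write yourself.
    path("accounts/", include("django.contrib.auth.urls")),
]
```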


I dug a bit deeper now and see Django indeed offers default views.

But it does not offer default templates, so you still have to write those from scratch, right?


I love using Firebase Auth: https://firebase.google.com/docs/auth


I looked through the docs and have some issues with it.

First of all, it is a service. That opens a can of worms on its own. The main issue being that you are 100% at the mercy of the service provider. Every time they decide to change their API, it adds maintenance work to your project. Sometimes it happens on short notice which can be very annoying.

Second, it is a strange mix of service and code. I don't see an easy way to use it without their Javascript SDK. And installing that SDK looks complicated from what I see.


To use it without the JS SDK, you need to create a server which integrates with their API. This usually means pulling an SDK in on your server, which is often okay, though I did find their Go SDK (https://github.com/firebase/firebase-admin-go) rough around the edges. We ultimately abandoned the service... I think if your integration requires any fine-tuning or integration with other services, you might want to look elsewhere.


My team tried Firebase Auth and while I think it's stellar for some use cases, I'd warn people about some potential issues:

- Firebase is a juggernaut of a library on the frontend: https://bundlephobia.com/package/firebase@8.10.0 Our product's main feature is being fast - it has the word "instant" in the name. For a product with users around the world, many of them will feel that network request and/or increased execution time. This is fine at the prototype stage or for teams which don't have the resources to implement a case-specific auth implementation that'll be lightweight and efficient. In our case we felt kind of stupid; this should have been clear to us from the beginning, but we wanted to move quickly. Ultimately this cost us time, and to be frank, that was mostly on me!

- Integration with other systems wasn't always smooth or simple. The Firebase Admin documentation left a lot to be desired. There are a lot of quirks all over. Some fields during some authentications might be empty for example, but it wasn't clear why or what this meant - this meant a lot of deep-diving and experimentation. We were using the official Go library. Sometimes we could use the library, other times we'd need to write a request out to Google's APIs. We made a lot of passes to improve on this thinking we must be missing something, but after hitting various existing issues online where developers dismissed the problem, it became clear that this is just life for a Go server supporting Firebase Admin.

I do recall that Android and Node.js support appeared much better, so if you suspect you're using a better-supported ecosystem, maybe this won't get in your way.

- Something that a better developer might understand and navigate than I did was the lack of assurance of data being present or its structure being consistent. Fields coming back for users seemed slightly inconsistent (not a problem for most requests). I wrote a parser for each provider to normalize data because occasionally we'd be missing the user's email or something. For example, as I recall, getting the email from a Twitter-authed user could be different from getting the email for a Google-authed user. I'll admit we had other issues to face so I spent the least time on this, then we dropped Firebase before I could revisit.

- Like any off-the-shelf solution, it ends up having significant limitations. This can be a great thing too, but for us it was a deal breaker. You can assign metadata to auth profiles but this felt too flimsy to us. I think this is a general Firebase issue, not specific only to auth: data integrity is poor. It was an ever-present problem that we couldn't attach users to our other data with rock solid guarantees. It felt like auth was almost ephemeral in our stack, and without fully owning it, it was as though our users weren't the cornerstone of our application but a floating member out in space.

Despite all of that it's a great product and I highly recommend it if it fits your needs. I'm not slamming the developers behind it. I think they're well aware that it's not for everyone, and they've done a great job making it work for as many people as it does.


Also, if you're writing Capacitor 3 apps, there's a badly supported open-source library that doesn't work for social logins. Firebase should support a decent library, but they don't.


I'd go with Supabase.


Thanks. I have been reading through the docs for a while now but I am not sure what to make of it.

What would be the minimum number of commands I have to type into a command line (Say in a fresh Debian install) to get a simple site up and running that lets users do the basic stuff like sign up, log in, log out, change password, delete profile?


There are some guides in the docs which cover Database + Auth + Storage. They take about 10 mins to work through. (For example, with React: https://supabase.io/docs/guides/with-react)

Auth:

    const { error } = await supabase.auth.signIn({ email })
Storage:

    let { error } = await supabase.storage
        .from('avatars')
        .upload(filePath, file)


You should look into Keycloak. It is an open source identity and access manager.


Thanks. For how long have you been using it?


Don't bother. Unless you're building enterprise apps that need to support OIDC it's completely the wrong product for making a static site a bit more interactive.


You shouldn't be rolling your own authentication. That's just asking for trouble.

I don't have experience with those frameworks, but the Rails equivalent, Devise, is very good. I assume it's the same for any mature framework.


Rolling his own doesn't mean custom algorithms. It means he will use the built-in language functions and/or external packages for that functionality while manually piping all of the pieces together.

He is not asking for trouble.


While you can definitely manually pipe all of the pieces together and have a functioning system, it's significantly easier, and likely more secure, to use the battle-tested auth library of a framework.

One could also use the builtin language functionality and external packages to write custom crypto, and I think we'd both agree that that's a bad idea. I'd argue that it's the same for authentication in general.


I understand that point of view, and I agree for the majority of projects, but relying on a fully working system that is coupled to some structure can be limiting if you need to make something slightly different.

I see frameworks like Laravel as the middle path, where you can rely on the framework to handle auth but can also make any changes you may need.


> and noticed first-hand how needlessly complex and expensive existing services are

Can you give some examples?

Out of everything I would consider 'complex', handling media wouldn't even make the top 1,000 and services like AWS mean you can store petabytes for pennies.


Media is very difficult on the internet. It's not enough to upload a video and share it; it needs to be transcoded into several different formats (h.264/MP4, VP9/WebM, h.265/HEVC), multiple resolutions (from 2160p down to 240p), and multiple bitrates (to support low/high bandwidth), as well as different framerates (60/30fps), all while retaining sync with an audio track that gets completely separate processing and compression.

None of this is a simple task, and you also need to serve the correct files to the correct clients - fun!
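As an illustration of the encoding ladder described above, here's a sketch that builds one ffmpeg command per rendition (the renditions and flags are typical h.264 choices, but treat them as placeholders rather than production settings):

```python
# One ffmpeg invocation per target rendition of the "ladder".
RENDITIONS = [
    # (height, video_bitrate, audio_bitrate)
    (1080, "5000k", "192k"),
    (720,  "2800k", "128k"),
    (480,  "1400k", "128k"),
    (240,  "400k",  "64k"),
]

def ffmpeg_command(src: str, height: int, v_bitrate: str, a_bitrate: str) -> list:
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",       # keep aspect ratio, force even width
        "-c:v", "libx264", "-b:v", v_bitrate,
        "-c:a", "aac", "-b:a", a_bitrate,  # audio gets its own compression pass
        f"out_{height}p.mp4",
    ]

commands = [ffmpeg_command("input.mov", *r) for r in RENDITIONS]
```

Each command would be run with something like `subprocess.run(cmd, check=True)`; real pipelines also package the outputs into HLS/DASH and keep the audio in sync, which is where most of the pain lives.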


> you also need to serve the correct files to the correct clients

That's been the hardest part for me as a developer. I've had several extended debugging sessions that ended up being "the media is encoded incorrectly for this Android device".


> Can you give some examples?

Someone woke up one day with a $8,000 bill while encoding videos:

https://github.com/awslabs/video-on-demand-on-aws/issues/48

There's also this pretty architecture overview in the repo:

https://github.com/awslabs/video-on-demand-on-aws/blob/maste...

In my opinion, it's too complex and I'd prefer a more simple solution to get to market faster.

Imagine adding just a few lines of code to get started, instead of setting up all these things on AWS, where you might also end up paying up to $0.12/GB for outgoing bandwidth.
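For a sense of scale, a quick sketch of how that egress rate compounds (the 2 GB per streamed hour of 1080p is an assumed average, not a figure from the thread):

```python
# Rough egress cost for video delivery at cloud bandwidth pricing.
EGRESS_PER_GB = 0.12  # $/GB, the rate mentioned above
GB_PER_HOUR = 2       # assumed average for 1080p streaming

def monthly_egress_cost(hours_streamed: int) -> float:
    return hours_streamed * GB_PER_HOUR * EGRESS_PER_GB

print(monthly_egress_cost(10_000))  # 10k hours/month -> $2400.0 in bandwidth alone
```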


> In my opinion, it's too complex and I'd prefer a more simple solution to get to market faster.

Some folks absolutely need to build their own video encoding pipeline for one reason or another--but the happy path's pretty well-established for folks who just Need Some Video, IMO. At Mux (full disclosure: I'm on the DevEx team there), getting up and running with video is one API call to CreateAsset, followed by a HTTP PUT if your video file isn't already accessible via HTTP somewhere. IMO, hard to be simpler than that.

AWS is expensive, but you're right in that it doesn't have to be. Measuring like-for-like is difficult, and per-minute rather than per-GB pricing tends to make more sense for most developers, but for 1080p content Mux is usually around half the price of AWS IVS. Which, having done this before at a previous job--if you want to stay in all-AWS land, is a way better call for your sanity than trying to hack it out yourself.


If the user can upload anything they want, you usually need to have various measures in place to prevent abuse. First, ideally each user should have their own sub-domain (i.e. user_id.somedomain.com) so that if they upload malware your main domain doesn't end up on blacklists.

Secondly, you might need to scan the content somehow, again for malware and possibly allow other users to report it.

Of course, there's the issue of copyrighted and illegal material; there should also be a way to report or detect this. I guess it means you need to be aware of and comply with the regulations of the country your servers are in, which can be tricky in some countries.


Maybe IPFS and p2p are a good tool for this? Escalate through increasing degrees of content sharing and validation, from anonymous on up, and build templates off of existing IPFS hosts that already do moderation?


Storj DCS is a good option for storing and delivering user-generated content. It is globally available, so you don't need to worry about AZ replication, and it's 1/10th the price of Amazon S3.

We were paying $100,000 a month to store user-generated content, and are now paying about $10,000 a month after migrating.


Can you elaborate on the specifics, like what types of objects you store on Storj and how the performance is?


What are you all doing to secure the user content? To ensure the content isn't exfiltrated?


mux.com (YC-backed) is already doing Stripe for video. Their API is really easy to use.



