Hacker News

I'd love to see how something like this could handle the bad-actor problem. That's what (IMO) is killing the web today.

How would you, for example, stop a rogue indexer from spewing an unlimited number of bad indexes to spam their garbage into the distributed protocol? And how would you address merely bad/misleading/faulty indexes?



Web of trust, I guess? Don't accept just anyone doing scraping/indexing; keep the trust network human-scale. I can imagine a world where the relevant protocols are open and organizations choose their own roots of trust. Common defaults would likely emerge: something like the Wikimedia Foundation or the Archive would serve as default roots for your average user, but you could add your own or remove those if you knew what you were doing.
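To make the idea concrete, here's a minimal sketch of how a client might decide whether to accept an indexer under that model. Everything here is hypothetical (the root names, the endorsement graph, the hop limit); no such protocol exists, this just illustrates "reachable from my chosen roots of trust within a few endorsement hops":

```python
from collections import deque

def is_trusted(indexer: str, roots: set[str],
               endorsements: dict[str, set[str]],
               max_hops: int = 3) -> bool:
    """BFS over the endorsement graph: accept the indexer only if it is
    reachable from one of the user's trust roots within max_hops."""
    frontier = deque((r, 0) for r in roots)
    seen = set(roots)
    while frontier:
        node, depth = frontier.popleft()
        if node == indexer:
            return True
        if depth < max_hops:
            for nxt in endorsements.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False

# Illustrative defaults: the user trusts two well-known roots, and each
# party may endorse other indexers it has vetted.
roots = {"wikimedia", "archive"}
endorsements = {
    "wikimedia": {"indexer-a"},
    "indexer-a": {"indexer-b"},
}

print(is_trusted("indexer-b", roots, endorsements))      # two hops from a root
print(is_trusted("spam-indexer", roots, endorsements))   # unreachable, rejected
```

A rogue indexer then can't spam the network into relevance: no matter how many indexes it publishes, nothing reaches users who haven't (transitively) chosen to trust it, and a root can cut off a bad endorsement chain by revoking one edge.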



