Hacker News | coinfused's comments

I like your site a lot!

David Sherret leaving as well

https://bsky.app/profile/dsherret.bsky.social/post/3mhbms5ph...

And Phil Hawksworth leaving as well

https://bsky.app/profile/philhawksworth.dev/post/3mhbamlutok...

Something is going on.


Thanks, I added them to the title.


37signals has open-sourced theirs here:

https://github.com/basecamp/policies

It hasn’t been updated in a while but they are a good starting point.


Those look pretty good, I read through the privacy policy and it covers just about everything I would think of.

Still, if it is for commercial use I'd pass it by the lawyers to make sure it has been vetted.


I got the same issue; it returns an HTTP 429 Too Many Requests.


Why was the original comment flagged? They had a valid and relevant point.


Fun game! I could never quite clear the 0.0030 threshold. I wonder how much screen quality/calibration impacts it.


A lot. My scores:

- 0.0028 on my MacBook Pro screen

- 0.0045 on my Dell monitor

- 0.0033 on my Pixel 10 Pro

And those scores are pretty consistent.


Didn't know, feel free to delete


https://ameye.dev

I write about computer graphics, rendering, and Unity! Sometimes other stuff, too.


Interesting! I’ve been working on a tracker for the Belgian parliament [0]. Similar story: a very old site, and the data is mostly published as a weekly PDF report (including votes and discussions).

I have:

1. A scraper/parser that ingests the data daily and transforms it into .parquet files.

2. An LLM summarizer to summarize longer discussions.

3. A static site that gets automatically generated (based on the .parquet files) to provide insight.
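To make the pipeline concrete, here is a minimal sketch of the parse step. The actual report format isn't shown in the thread, so the semicolon-separated line layout, the `VoteRecord` shape, and the field names are all hypothetical; only the general approach (extract structured records, then dump them to .parquet) follows the description above.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical record shape; the real report layout is not shown in the thread.
@dataclass
class VoteRecord:
    member: str
    motion: str
    vote: str  # "yes" | "no" | "abstain"

# Assumed line format: "member;motion;vote", e.g. from text extracted
# out of the weekly PDF report.
VOTE_LINE = re.compile(
    r"^(?P<member>[\w .'-]+);(?P<motion>[\w /-]+);(?P<vote>yes|no|abstain)$"
)

def parse_vote_line(line: str) -> Optional[VoteRecord]:
    """Parse one semicolon-separated line into a vote record, or None."""
    m = VOTE_LINE.match(line.strip())
    if m is None:
        return None
    return VoteRecord(m["member"], m["motion"], m["vote"])

# The accumulated records could then be written out for the static site,
# e.g. with pandas.DataFrame(records).to_parquet("votes.parquet").
```
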

What makes your tracker ‘real-time’? Does it ingest the livestream of a parliamentary session while it is happening?

[0]: https://zijwerkenvooru.be/


Similar setup here with Zürich's city council (https://github.com/SiiiTschiii/zuerichratsinfo), we scrape daily, parse votes and auto-generate social media posts. Your Belgian tracker looks great, we should definitely exchange notes on the LLM summarization approach!


For things like my personal blog I don’t really need complex analytics; just page views are fine, so I’m using GoatCounter, which has been really great so far. It has all I need and nothing more.

https://www.goatcounter.com/


Another one for personal sites is GoAccess. No DB needed, only log files. It gives a nice overview: which pages were visited, how often, browser statistics, etc. https://goaccess.io/
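The core of what log-file tools like GoAccess do is parse each access-log line and aggregate. As a rough illustration (not GoAccess's actual implementation), here is a sketch that counts page views per path from Combined Log Format lines, the default for nginx/Apache:

```python
import re
from collections import Counter

# Simplified regex for the Combined Log Format; real parsers handle
# more edge cases (quoted referrers, malformed requests, IPv6, ...).
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

def page_views(lines):
    """Count successful (2xx) requests per path."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m["status"].startswith("2"):
            counts[m["path"]] += 1
    return counts
```

This is also where the trade-offs quoted further down come from: the log only contains the request line, so bot traffic, screen sizes, and page titles have to come from somewhere else.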


Nice! Never heard of GoatCounter before. What does it give you that log analysis with tools like GoAccess or Webalizer doesn’t?


https://www.goatcounter.com/help/logfile

> - There will be more bot requests.

> - Some data won’t be available: screen sizes, page titles.

> - It won’t disambiguate to canonical paths from <link rel="canonical">; i.e. /page and /page?x=y will show up as two different paths.

