It's a good thing Zuck stopped firing the bottom 5% of performers because he would be in that group every time. His remarks in this case are atrocious and are rehearsed lies. Let's hope his performance in this case is better than the Metaverse, but I doubt it.
Thank you for sharing this. I think it's important to take a step back and consider the real existential threats that humanity faces like climate change. I am hoping for similar evidence of the doom from AI, but I have a feeling that the AI doomers have ulterior motives.
I think people hate AI-generated writing more than they like human-curated writing. At the same time, I find that people like AI content more than my own writing. I write, comment, and blog in many different places, and I notice that my AI-generated content does much better in terms of engagement. I'm not a writer, I code, so it might be that my writing isn't professional. My hand-written code, though, still edges out AI's.
We need to value human content more. I find that many real people eventually get banned, while the bots, which are built to follow the rules, never are. The Dead Internet hypothesis sounds more inevitable under these conditions.
Indeed, we all now have a neuron that fires every time we sense AI content. Maybe we need to train another one that activates when content is genuine.
How do you know whether your engagement came from real humans or not? I'd also assume bot traffic is far more tolerated on platforms like Facebook, Instagram, and Twitter, especially anything Meta-owned; they have a history of lying to people about numbers and were never punished for it.
It was always about money. People never cared about new things or innovation; it was always funneled through gatekeepers. The "moat" was a talking point for VCs, who are used to 90% of their ventures failing, and yet it was taken seriously. We seem to forget all the racist jokes about how China would just make a cheaper copy. Where was the talk of a "moat" then?
Congratulations to the Waymo team! I was excited reading through the overview. What I'm watching in the self-driving space is lidar and radar integration. Leaders like Elon Musk at Tesla insist they're unnecessary, yet Waymo has devoted significant effort to them in its 6th generation. My personal view is that lidar and radar are core requirements for self-driving, because "These upgrades help the lidar penetrate weather and avoid point cloud distortion near highly reflective signs, expanding the Waymo Driver's ability to see through heavy roadspray on freeways and other complex edge cases."
I stand by this pledge. I even have a Clicks keyboard to avoid the iPhone's. I have an interesting hypothesis as to why, and it's counterintuitive: the larger the screen gets, the less accurate a touchscreen keyboard is. I picked up an original iPhone, started typing, and was surprised by how accurately and quickly I typed.
Let's take an exaggerated example. Surely a touchscreen keyboard the size of a flatscreen TV is too large, maybe even one the size of a regular computer monitor. So where is the sweet spot, and why? I think it comes down to the interplay between manual error correction and software error correction. On the smaller iPhone keyboard, if I make a mistake, it's obvious and I hit the backspace key. There's much less software error correction on a smaller screen because there's less room for error per key. On larger screens, I find that if I touch a key at a certain angle, the software registers an adjacent key. My fingers also have to travel farther, which increases the error rate. On top of that, the obsession with shrinking bezels forces me to hold the phone in awkward ways so it doesn't register a swipe from the sides.
Personally, the iPhone 6 was peak iPhone. The obsession with shrinking bezels significantly increases mis-swipes and introduces awkward workarounds like the "notch", the "island", and hidden sensors. The flat screen also made the keyboard pleasant to use. The phone was slow enough that autocorrect's surveillance wasn't useful, but fast enough for everything else.
There were some valid contributions and others that needed improvement. However, the maintainer enforced a blanket ban on AI contributions. There's some rationalizing, such as tagging it as a "good first issue", but matplotlib isn't serious about outreach to new contributors.
It seems like YCombinator is firmly on the maintainer's side, and I respect that, even though my opinion differs. It signals a disturbing hesitancy toward AI adoption among the tech elite, and their hypocrisy: they're playing a game of who can hide their AI usage best, and anyone being honest won't be allowed past their gates.
Removing jQuery is a great task and one I hope to carry out in some of my own JavaScript codebases, so thank you for this post. I don't know exactly why, but I've found these agents to be less useful when a task runs counter to popular coding practice. Although there are many reasons why replacing jQuery is a great idea, coding agents may fail at it because so much of their training data relies on jQuery. For example, many top answers on Stack Overflow use jQuery, often to implement the very logic you're trying to replace.
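Much of a jQuery removal is mechanical translation to native APIs. A minimal sketch of a few common substitutions (the utility functions below run anywhere; the DOM half of a migration obviously needs checking in a real browser, and the selectors in the comments are hypothetical):

```javascript
// A few jQuery idioms and their modern native equivalents.

// $.extend({}, a, b)  ->  object spread / Object.assign
const defaults = { retries: 3, verbose: false };
const merged = { ...defaults, verbose: true };

// $.trim(s)  ->  String.prototype.trim
const cleaned = '  hello  '.trim();

// $.inArray(x, arr) !== -1  ->  Array.prototype.includes
const found = [1, 2, 3].includes(2);

// $.map(arr, fn)  ->  Array.prototype.flatMap
// (jQuery's $.map flattens returned arrays one level, so
// flatMap is the closer match than plain map)
const doubled = [1, 2, 3].flatMap(n => [n * 2]);

// DOM side, for reference (hypothetical '.item' selector):
// $('.item').addClass('active')
//   -> document.querySelectorAll('.item')
//        .forEach(el => el.classList.add('active'));

console.log(merged, cleaned, found, doubled);
```

The utility half is the easy, safely automatable part; it's the event and DOM-manipulation code where the training-data bias toward jQuery seems likeliest to trip up an agent.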
When I was just beginning, all of the productivity measures read 0 and I felt like a failure. The most attainable was lines of code. Now that I'm tackling more advanced tasks, it's not a great measure of productivity. I've heard so many opinions about how LOC isn't a great measure, and then the same people trample on all of the work I've done, out of spite, because I've written more code than them. I think LOC is great precisely because productivity measures are for beginners and people who don't understand code. That audience doesn't know the difference between a hundred and thousands of lines of code; both are trophies to them.
No metric you come up with really applies to advanced roles. But even lines of code is good enough to show progress from a blank slate. Every developer, and every advanced AI agent, has to be judged case by case.
But if you remove 1kLOC and replace them with 25 lines implementing a much better function, the report reads -975 LOC. Does that count as negative productivity?
Putting opening and closing brackets on their own lines could double the LOC count, but would that actually improve the codebase? Would it be a sign of doubled productivity?
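The brace-style point is easy to demonstrate with a toy counter over the same function written two ways (naive newline counting, just to show how fragile the metric is):

```javascript
// The same function in two brace styles; a naive line count
// treats them as very different amounts of "work".

const compact = `function clamp(x, lo, hi) {
  if (x < lo) { return lo; }
  if (x > hi) { return hi; }
  return x;
}`;

const expanded = `function clamp(x, lo, hi)
{
  if (x < lo)
  {
    return lo;
  }
  if (x > hi)
  {
    return hi;
  }
  return x;
}`;

// Naive LOC: count newline-separated lines.
const loc = src => src.split('\n').length;

console.log(loc(compact), loc(expanded)); // → 5 12
```

Identical behavior, yet the "expanded" author reports more than double the LOC, which is exactly why serious counters ignore blank and brace-only lines.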
The OpenBSD project prides itself on producing very secure, bug-free software, and they largely trend toward as few lines of code as they can get away with while maintaining readability (so no code-golf tricks, for the most part). I would rather we write secure, bug-free software than speed up our ability to output 10kLOC. Typing out the code isn't the difficult part in that scenario.
No one judges a painting by the amount of paint, or a wooden chair by the number of nails in it. The amount of LoC doesn’t matter. What matters is that the code is bug-free, readable, and maintainable.
But reducing the amount of LoC helps, just like choosing the right word helps in writing. That's the craft part of software engineering: having the taste to write clear, good code.
And just like writing and any other craft, the best way to acquire such taste is to study others' work.