I hate ads. I use an ad-blocker, I've abandoned Chrome so I can effectively block ads on YouTube, and I avoid ad-riddled services like television (ad-free streaming or piracy for me, thanks).
I propose people promote their products on their website and at their place of business. I don't want anyone trying to sell me things. If I need something I go and research it to figure out what my options are and which one I want.
I genuinely believe the world would be a better place with severely limited advertising because a lot of really terrible things are driven by ad revenue: social media and 24-hour news are my go-to examples. Sure, broadcast television and radio would also die, but at this point I don't think we're losing much. And sure, content creators would lose out on ad revenue, but the vast majority already make very little in ad revenue and have found other ways to get funding.
The underlying problem is that businesses that rely on ad revenue are incentivized to hold people's attention as long as they can while showing as many ads as they can. Producing a quality product takes a back seat to misleading, emotionally charged, and addictive content that's designed to maximize engagement.
The paradigms from different programming languages aren't always compatible. The differences in those paradigms are the strengths and weaknesses that make languages better or worse for various applications. It's hard for me to imagine low-level plumbing that would give you the languages' strengths when mixing languages -- some mixes just don't make sense.
As an example... why would I want to call Python from C? If I'm writing in C, it's because I want high performance with low-level control of resources. To call a Python function, I suddenly need to spin up the Python virtual machine, including its baggage like its garbage collector -- there goes my performance and low-level control, and I'm suddenly running a massive stack that I didn't write!
Alternately, you could compile the Python code into a bytecode (different from the PVM's bytecode) that C can call performantly... but in doing that you lose the benefits of Python. Suddenly your Python code needs to be compiled after every change (slowing down the fast iteration that is one of Python's greatest benefits) and you're required to declare and enforce strict types (no more easy duck typing!) so that C can count on getting back data in the expected format.
In contrast, calling C from Python does make sense and can already be done.
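For anyone curious, here's a minimal sketch of that direction using the standard-library ctypes module. The library name (libadd.so) and the add function are made up for illustration; you'd point it at whatever C you actually compiled.

```python
# Minimal sketch: calling C from Python via ctypes (standard library).
# Assumes a C function like   int add(int a, int b) { return a + b; }
# compiled into a shared library, e.g.:  gcc -shared -fPIC -o libadd.so add.c
import ctypes

lib = ctypes.CDLL("./libadd.so")                  # load the compiled C code
lib.add.argtypes = (ctypes.c_int, ctypes.c_int)   # declare the C signature
lib.add.restype = ctypes.c_int

print(lib.add(2, 3))  # Python stays in charge; C does the work -> 5
```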
And writing plumbing to handle generic interoperability between 2 languages would be a lot of work. Writing parts of a project or system in different languages and then having them communicate through a standard interface like an API is much less work.
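To make "standard interface" concrete, here's a toy sketch of one component exposing a function over HTTP+JSON using only the Python standard library. The /square idea and payload shape are invented for the example; nothing here is specific to any real project.

```python
# Toy sketch: expose one function as an HTTP+JSON endpoint (stdlib only).
# The route and payload shape are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        n = json.loads(self.rfile.read(length))["n"]   # e.g. {"n": 7}
        body = json.dumps({"square": n * n}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)                          # -> {"square": 49}

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```

Any language that speaks HTTP can consume it, e.g. curl -X POST localhost:8000 -d '{"n": 7}'.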
Huh... I never thought about it like that. I had no interest in using OpenClaw, but my coworkers are extremely dumb and inconsistent. OpenClaw might be comparable.
If you pursue this, be aware that consumers are tired of AI and don't believe it works well. You need to show that its "insights" are reliable, accurate, and useful.
If you're stuck in traffic, stressed out because you're running late, do you think it's helpful to have a notification on your phone pop up and tell you "Stress slightly elevated this afternoon"? Do you think the AI could suggest solutions that the user won't be upset to see ("Try to relax with a breathing exercise...")?
If you ask it "Why do I feel tired today?" do you think it's helpful to get a ChatGPT response listing bullet-point reasons people are commonly tired? You already know if you didn't get enough sleep, slept poorly, are burnt out, skipped a meal, haven't been exercising regularly, are recovering from a recent workout, are recovering from illness, or haven't been drinking enough water. Can the data collected actually identify a specific cause? Can the AI then suggest a specific, actionable solution?
> A good example among analog controllers is the Atari one that had a variable capacitor and the capacitance was measured to infer its position. Although the measurement is digital, the controller, yes, was analog.
An abacus allows you to slide beads along a rod. Similar to your example of an analog controller, the device itself is analog but the measurement is digital. The traditional way to use an abacus is to slide beads from one end to the other with beads on one end counting as 1-5 (or multiples of 5, and so on). But you don't have to use it like that. You could use just 1 bead on each rod to represent a value from 0 to 5 with its position along the rod, or even a value from 0 to 100 with its position. Heck, you could use two beads on the rod to represent a range using their positions. Using this logic, an abacus is analog but the traditional way of interpreting it is digital.
One could argue that fingers are analog in the same way as abacus beads or electronic signal voltages in "digital" circuits. Yes, the traditional way to count on your fingers is to count each finger held up as a value of 1 and then add up the number of fingers to get the represented value, but fingers can be anywhere between entirely up and entirely down. You could hold a finger halfway up and count it as 0.5.
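To put the same point in code: here's a toy sketch where one physical bead position is read two different ways. The function names and the 0-to-1 normalization are mine, purely for illustration.

```python
# Toy sketch: one physical state, two interpretations.
# bead_pos is the bead's position along the rod, normalized to [0.0, 1.0].

def read_digital(bead_pos: float) -> int:
    """Traditional reading: the bead either counts (pushed past halfway) or it doesn't."""
    return 1 if bead_pos >= 0.5 else 0

def read_analog(bead_pos: float, scale: float = 100.0) -> float:
    """Unconventional reading: the position itself encodes a value from 0 to scale."""
    return bead_pos * scale

pos = 0.73                      # the same physical state of the abacus
print(read_digital(pos))        # -> 1      (digital interpretation)
print(read_analog(pos))         # -> 73.0   (analog interpretation)
```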
If you feel that argument falls under remark 3, I think you have some options:
1) Resolve the conflict between your example of an analog controller in remark 2 and your refusal to consider interpreting an analog signal/state as a discrete value as digital (as expressed in remark 3).
2) Accept that trying to fit everything in the real world into strict definitions is a fool's errand. Definitions are essentially simplified models that allow us to represent some aspect of the real world, but they can never entirely encapsulate the nature of the real world (the map is not the territory).
Let's stick with option 1 because it's more practical than philosophical (although exploring option 2 may help you cope with life better in the long run). You can go with option 1 by simply dropping remark 3 entirely and accepting that a device can be analog in its physical form and digital in an interpretation.
Alternately, you can accept that the context affects which model best describes an abacus/fingers/electronic signal because your interpretation defines what the values represent. That is, the abacus has no representation of "internal values" -- it doesn't care if the beads are supposed to be 1's, 5's, fractions, or space ships. What each bead represents lies entirely in the person looking at the beads, not the abacus.
An example in that line of thought: the computer engineer designing a chip has to face the reality that electronic signals are analog so he can design a chip that functions properly. In his context of work, the chip is analog. The software engineer who uses that chip needs to know very little about the underlying hardware and is able to model its behavior as entirely digital. In his context of work, the chip is digital.
I'm not going to argue against engineers using AI coding tools to write boilerplate code faster. I certainly think it's a useful tool for that.
But outside of that context, it's problematic to argue that "you can't tell if something was created by AI just by looking at it. And if you can't tell the difference, then the difference doesn't matter."
It feels like we aren't too far away from AI being indistinguishably good at other things. Actors would obviously be upset if you started producing movies with their likeness without paying them (and without them shooting a single scene). Screen writers, voice actors, authors, and artists would be similarly upset. Fans have already rallied against video game studios that try to use AI to replace artists.
I certainly think the "if you can't tell the difference, then the difference doesn't matter" test is problematic when you look at video shared with news stories.
So what makes writing code different? Is it because consumers of movies, television, books, and art care if AI took a job away while consumers of code don't? Is it because people who write code don't really care about writing boilerplate and just want to get past that to bigger, better things? Is it because a lot of people writing code don't like it at all and only got into it for the money?
I don't think the knee-jerk reaction to reject all AI-generated content is misplaced. AI raises real questions and creates real problems that we need to address instead of simply dismissing because writing CRUD is boring.
> But outside of that context, it's problematic to argue that "you can't tell if something was created by AI just by looking at it. And if you can't tell the difference, then the difference doesn't matter."
I agree wholeheartedly. This argument is just "the ends justify the means" in different words. Sadly, there are far too many people who actually think that's true.
I certainly expect Tesla to use the cameras on their cars for similar purposes if they haven't already. Although I would expect them to distance themselves from it by selling the location data 'in aggregate' to another company that interfaces with law enforcement agencies.
It's been overshadowed by Python, which has a sexier image because it isn't associated with Microsoft. It certainly doesn't help that in the beginning C# was a Windows-only language tied to the .NET Framework. It's taken a decade for word to get out that it has evolved past that.
From just trying things randomly, I think the objective is to get each numbered square to 'claim' or 'path into' as many empty squares as the number shows. That is, if you see a red square that shows a 3 on it, you need to click on that red square, then click on an adjacent square to 'claim' it for that red square, then click on a square adjacent to the one you just clicked to 'claim' it as well, and so on.
Your options, from worst to best:
- add a tutorial (everybody hates tutorials)
- add some concise text to the bottom of every page that explains the objective and how to play
- find a theme that makes the objective and how to play intuitive
Other feedback:
- You absolutely need to show which square is selected (if any).
- There's either a bug or a rule I don't understand: you can't path into a square a second time, even if it's currently empty because a previous attempt to occupy it was canceled/reverted.
Ah, it's a bug, thanks for pointing that out (you should be able to path to a square that was previously pathed but is currently empty). Working on it now.
Yes, in this game you lay "pipes" from "depots", and must fill the entire grid with pipes.
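In rough Python terms, the win condition is something like this (a simplified sketch, not the game's actual code or data model):

```python
# Simplified sketch of the win condition, not the game's real implementation.
# grid maps (row, col) -> "empty" or an int (a depot's number);
# paths maps each depot cell -> the list of cells its pipe runs through.
def is_solved(grid, paths):
    depots = {cell: v for cell, v in grid.items() if isinstance(v, int)}
    # each depot's pipe must be exactly as long as its number
    if any(len(paths.get(cell, [])) != n for cell, n in depots.items()):
        return False
    # and every non-depot cell must be covered by some pipe
    piped = {cell for path in paths.values() for cell in path}
    return all(cell in piped for cell in grid if cell not in depots)
```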
Thanks for the feedback, adding some text with the basic rules now.
I initially made the first "tutorial" level a 4x4 grid with a [3] in each corner, but the 4x4 levels were trivial and I removed them.