Not too long ago, I thought Markdown was the bee's knees. But having been forced to write some documentation in plain text, I learned that plain text is significantly more readable than raw Markdown.
I think one of Markdown's biggest sins is how it handles line breaks. Single line breaks being discarded in the output guarantees that your nicely formatted text will look worse when rendered. I understand there are use cases for this. But this and the "add a trailing space" workaround are particularly terrible for code documentation.
> I think one of Markdown's biggest sins is how it handles line breaks. Single line breaks being discarded in the output guarantees that your nicely formatted text will look worse when rendered.
My experience has been the complete opposite. Markdown parsers that don't discard single line breaks (e.g. GitHub-flavored Markdown) turn my nicely formatted text into a ragged mess of partially filled lines. Or, for narrow display widths, an alternating series of long and short lines.
Markdown parsers that correctly discard single line breaks make sure that the source text (reflowed to fit a maximum number of characters per line) and the rendered text (reflowed to fit the display width) both look reasonable.
> But many modern artists challenge these long traditions, creating statues of figures that are fully clothed. Consider Thomas J. Price’s “Grounded in the Stars”: a 12-foot, monumental sculpture of a woman standing in heroic counterpoise, wearing a T-shirt, leggings and comfortable shoes!
Looking at that modern statue, I can't help but be bored. It doesn't draw my attention. I think that's because it depicts a normal, everyday clothed person. We see those every day. It's something mundane.
A naked statue is more interesting to me. It's less a depiction of a person and more of mankind in general. It has an abstract but intimate quality, inviting reflection (wow, that sounds posh).
Programmers have enjoyed an occupation with solid stability and growing opportunities. AI challenging this virtually overnight is a tough pill to swallow. Naturally, many subscribe to the hope that it will fail.
How far AI will succeed in replacing programmers remains to be seen. Personally, I think many jobs will disappear, especially in the largest domains (e.g. the web). But I think this will be only a fraction, not a majority. For now, AI is simply most useful when paired with a programmer.
> Programmers have enjoyed an occupation with solid stability and growing opportunities.
This is not the case:
- Before the 90s, programming was rather a job for people who were insanely passionate about technology, and working as a programmer was not that well-regarded (so no "growing opportunities").
- After the burst of the first dotcom bubble, a lot of programmers were unemployed.
- Every older programmer can tell you how quickly the skills they have can become, and have become, irrelevant.
Over the last decades, the stability and opportunities for programmers were more like a series of boom-bust cycles.
Let me put it this way: I do have my opinion on this topic, but the topic is insanely multi-faceted, and some claims that I am rather certain about are closer to the boundary of the Overton window of HN, so I won't post them here.
But the article that this whole discussion is about offers, in my opinion, a rather balanced perspective on using AI for coding (which does not mean that the article is close to my own opinion).
I will just give some less controversial thoughts and advice concerning AI:
- A huge problem when discussing AI is that the whole topic is a hodgepodge of various very diverse topics.
- The (current) AI industry has invested a lot of marketing effort in redefining what AI stood for in the past (it has basically convinced the mass of people that "AI = what we are offering").
- I cannot say whether AI will be capable of replacing lots of people in office jobs or not (I have serious doubts). Media loves to disseminate this topic, but in my opinion it does not really matter: the agenda is rather to spread fear among employees to make them more obedient.
- Even if AI turns out to be capable of replacing only a few office workers (the scenario I find more likely), that does not mean management will not use "AI"/"replaced by AI" as a very convenient excuse to get rid of lots of employees. The dismissed workers will then mostly vent their spleen on the AI companies instead of the management; in other words: AI is a very convenient scapegoat for inconvenient management decisions.
And yes, I consider it possible that some event leading to mass layoffs might happen in a few years (but this is speculative).
- While I cannot say how much quality improvement is still possible for current AI models (i.e. I don't know whether a technological barrier exists), the signs are clear that, as of today, AI companies have hit some soft "cost barriers". I don't know whether these are easily solvable, but be aware of their existence.
- So, my advice is: if an AI model is of use for some project of yours (e.g. generating graphics/content for your web platform; using it as a tool for developing the next scientific breakthrough; ...), do it now. Don't assume that the models will keep doing this nearly for free in the future (it may be that this stays possible, but be cautious).
The endgame is to produce AI that will not need any supervision by the time the current generation of experienced developers retires, or even sooner. I don't know whether it will happen, but many are betting on it, and the models are still improving; no flattening is visible yet.
This implies programming is done and there will be no other advancements.
And flattening is being seen, no? Recent advancements are mostly from RL’ing, which has limitations (and tradeoffs) too. Are there more tricks after that?
Yeah, even the AI CEOs are admitting that training scaling is over. They claim that we can keep the party going with post training scaling, which I personally find hard to believe but I'm not really up to speed on those techs.
I mean, maybe you can just keep an eye on what people are using the tools for and then monkey patch your way to sufficiently agi. I'll believe it when we're all begging outside the data centers for bread.
[Based on the rest of the history of science and technology since the Stone Age, I would place AGI at 200-500 years out at least. You have to wait decades after a new toy is released for everyone to realize everything they knew was wrong; then the academics get to work, then everyone gets complacent, then a new accidental discovery produces a new toy, etc.]
For a brief blip in time over the last few years, it was possible to jump from a code camp to a decent-paying job and vaguely disappear for a while like Milton from Office Space. The current period, in a bad economy, is more of a reversion to the mean.
You can. That progression is normal. I know this because I am such a case. I wasn't able to produce a single sound on pitch. Now I can nail some songs (as long as they don't go crazy on technique).
Learning to sing is taking control of your voice. You use the same biology that you have been using for speech and other vocal sounds since birth. It all comes built in. Of course it comes more naturally to some people, just like any other activity.
There are some decent videos on YouTube, but take actual vocal lessons if you can. Videos are not a substitute for lessons.
I don't like the posted page. The descriptions aren't very helpful, and neither are most videos on YouTube. I know from experience. For a complete beginner, this is frankly a useless resource.
Because it is so easy to get lost in the muck, do you have any particular recommendations on some “decent” YouTube videos/channels to get at least some practice before taking lessons with a vocal coach?
The channel I've watched the most videos on is Chris Liepe's. The video "STOP Singing Vowels this way! (its making you tense)" was in my singing playlist:
Oh, and I forgot: I can play some instruments, but the voice is the cruelest one to learn. You can't "see" what you are actually doing (wrong). And most of the time you can't even feel it very well. This is why vocal training is full of analogies and imagery.
I wish it weren't. I would have gotten a lot more mileage out of "force a yawn, see what your mouth does, and do that" rather than "more space, more space, open up!".
Their work includes pedagogical research to develop a consistent terminology which abandons lots of outdated and confusing terms such as you mention. No more ambiguous words like "project" or "space" or "support".
Their research also includes using endoscopic cameras to directly observe the vocal tracts of professional singers.
I've not actually trained with them, I just like their research and approach.
Seconding both points. I'm not one of those cases, as I could already sing decently, but I've seen people go from "terrible" to singing professionally.
I also agree that the linked page isn't useful, it's more of a glossary than anything, but then again, I'm not convinced that a distinction between head voice and chest voice actually exists. I've never been able to tell any qualitative difference, as opposed to, for example, falsetto, and the community can't really agree on whether they actually are a thing or not.
I see a lot of people in here posting success stories from lessons, which is great. But I tried lessons for about 2 months and got absolutely nowhere, haha. It was just repeatedly practicing some song that I wasn't super into, and I never even felt like I was "singing", just talking kind of louder / longer, and it felt very forced and odd. Terrible experience tbh, but I do love singing and still want to learn some day. (I generally just sing in falsetto to songs in my car because I'm too timid to really project my actual voice.)
It sounds like you didn't have a very good coach. My first coach wasn't very helpful, my second was amazing. Keep looking!
Open mic nights at your local bar are a great source of data. Approach people after their performance, compliment them, and ask them if they have a coach they'd be willing to recommend.
All of them, through software implementation, as assembly programmers have done since forever.
You simply choose the integer type that your problem or task requires. If the hardware genuinely can't cope with it (performance), you reevaluate your requirements, define new constraints and choose new types. This is basic requirements engineering, which C only made more convoluted.
I think overall it's better to provide "natural" default types, but also to have had something like stdint.h. Then again, mandating even a stdint.h that early would have made writing implementations quite difficult. I think it was, and is, an alright tradeoff.
How will you know whether your integer type is adequate for the problem at hand if you don't know its size?
Choosing the right type is a function of signedness, upper/lower bound (number of things), and sometimes alignment. These are fundamental properties of the problem domain. Guesstimating is simply not doing the required work.
C specifies minimum sizes. That's all you need 99% of the time. I'm always annoyed by the people who assume int is 32-bits. You can't assume that in portable code. Use long or ensure the code works with 16-bit ints. That is how the type system was meant to be used. int was supposed to reflect the natural word size of the machine so you could work with the optimum integral type across mismatched platforms. 64-bit platforms have mostly abandoned that idea and 8-bit never got to participate but the principle is embedded in the language.
> The programmer should rather prescribe intent and shouldn't constantly think about what size this should exactly have.
You still have to constantly think about size! Except now you have to think about _minimum_ size, and possibly use a too big data type because the correctly sized one for your platform had a guaranteed minimum size that's too small for what you want to do.
It does agree with what I intended to say. The values a type needs to be able to represent are very much part of the intent of a variable. What the programmer doesn't need to specify, is with what bit pattern and what exact bits these values are going to be represented. There are use cases where you in fact do want to do that, but then that implies that you actually care about the wrapping semantics and are going to manipulate bit patterns.
The idea is mostly that we shouldn't worry. The user of the lib on an Arduino will feed it Arduino-sized problems, and the amd64 user will likewise feed it larger problems. Again, just think of the transition from 32 to 64 bit. Most ranges are "input"/"user" dependent, and it would have been needlessly messy, even with automatic conversion help, to rewrite say every i32 to i64, or to decide which ones to convert.
As I said, today when it really matters we can use stdint. But I feel it would have been too burdensome to mandate on the standard in the early days of C.
Like fucking what? If you do any timestamps, you just use whatever fastest unsigned type you have on the target platform, and you do NOT care. 16-bit? Wrapping at 1 min? That's an eternity even on a 2MHz 6502... You just count and subtract to get the difference. Usually you are in the <1000ms range, so wrapping is not a problem.
If you target both 32-bit and 16-bit, you either think about it and use long (which is more costly on 16-bit), or you just count seconds... or ticks... or whatever you need.
I'm not writing the app. The app was written according to your preferred design and I'm compiling it for Arduino. You say to just use int because it always has enough bits, then you say to sometimes use long because int might not have enough bits.
I don't know the intended use. If you need a delay or a difference, 16-bit is more than enough. If you are writing a generic clock with ms accuracy, it will not be enough.
You either split it or use larger storage. It's not rocket science...
When you choose unsigned int for that type, you compare against UINT_MAX and return EOVERFLOW, ENOMEM, EOUT_OF_RANGE or whatever you like when someone sets a timer greater than that. Or you choose another type, e.g. unsigned long, which is guaranteed to hold values up to at least 4294967295. I happen to have been programming for the Arduino platform recently, and milliseconds in the Arduino API are typed unsigned long.
If your decision is that you really want to store all 32-bit values, then you use uint_least32_t or uint_fast32_t, depending on whether you are also resource-constrained.