Hacker News | superJimmy64's comments

>(You don’t have to drive creative folk like most workers. They drive themselves. Just wind ’em up and let ’em go.)

Great link: Very, VERY insightful.


Agreed. What an excellent read! Highly recommended.


(Not that it really matters, but I'm 27...) A few things came to me after reading the post:

Firstly, the situation shared by the OP and many others is pure insanity: the very people who grew up with this technology in its infancy are now struggling to find a place just as it has really taken off.

Secondly, for those in their mid-30's and onward, to realize how immensely skilled they likely are at writing (if not already recognizing this fact). One of my favorite parts of staying up to date in this industry is getting to read every and any type of work/post/article from the older guys (still must admit that 37 doesn't feel old to me). Because your time was spent communicating primarily electronically, this skill has spilled over into creative writing and all other forms, which makes for incredibly well-written pieces which keep me going to this very day. So thank you for that.

Thirdly, how amazing it is that a person can now come up with an idea, build out the details, and launch the website within 24 hours if truly determined. Loved the website; I definitely think that as younger generations start to take advantage of the current tech and build their own on top of it, there will be a need to differentiate between the various abilities/experience of devs.

Nice work.


I'm 41. Must make me ancient. :)

I completely agree with the OP. About 10 years ago, even though I'd already written an O'Reilly book, I created a very focused professional blog just as an excuse to hone my writing skills.

I just created a blog series about building your own language in JS, just because I've always wanted to do it. There's definitely freedom that comes with age. (and goes away with small children)

https://www.pubnub.com/blog/2016-09-26-4-tutorials-create-la...

I've found that some good engineers find a role as developer evangelists, using both their technical and writing skills. Maybe check out DevRelCon.

http://devrel.net


>(still must admit that 37 doesn't feel old to me)

Same; I find it a bit terrifying that 35+ is considered 'old' in our industry...


I'm 41 and I still don't feel old, even though I work with a few people who are 10-15 years younger than me.

We tend to think of old programmers as ancient neckbeards who write COBOL and FORTRAN and grew up on Usenet programming PDP machines; but the reality is that today's 40+ year old programmers were kids in the 90s, were in their 20s when languages like JS, Python and PHP became popular, and played Quake 3 in their early twenties.

BTW, I'm not doing straight-up programming work anymore, more senior-architect-type roles in recent years, so in this context it might be more "acceptable" in the industry for me to still be spending time with an IDE at my age.


It's not, but he has to call himself old to give his business idea credibility. (which I think is fair, by the way)


I don't think it's so much a business-motivated thought as an observation that there is a very distorted age spectrum associated with working in tech-related industries.


I'm starting to think that the way things worked is that older people realize that the best way to get ahead is to exploit younger inexperienced workers who don't yet know their own value and power, whether it's as employees or through being a landlord rentier.

That doesn't work so well anymore when the young can build their own web business without any of the contacts and wisdom required before. My generation (x) are going to have to keep working a lot longer.


And so is theirs since there is only so much space for successful businesses and so many of them fail. It's even worse because small, nimble companies can do the jobs of many companies now so you have to be even better to succeed and stay on top.


Thank you so much! Made my day with this comment, more than the upvotes even.


I'll vouch for this as well. A cool test: a few hours into a long night when you're working on something, try disabling f.lux. I was blown away by how clearly this app should be a necessity for all devs, and possibly all techies.


I have Flux on a 1 hour shift, so I don't notice the colors changing as it happens, but it absolutely makes a huge difference. Disabling it at 10pm after a few hours of 'flux-ed' viewing feels like staring directly into the sun.


I literally just started to go back and re-learn some of the core concepts for the latest version... wow is Swift ugly!

EDIT: I'm not attempting to discredit the greatness that can be had... Just wanted to describe a feeling which I had while using it is all!


I don't see how your second sentence follows from the first one? It's true that Swift is heavily type safe but that makes it appealing to me. I actually really enjoy writing Swift.

Would you rather it were dynamic so you could write less code or something?


I should have specified that it wasn't an attempt to put down the language, as there are many, MANY caveats... only wanted to point out how writing it almost makes me feel unclean somehow hah


It's a systems language, and imo way more beautiful than c++.


Absolutely. I'm not attempting to put it down as somehow not worthy of anyone's time, just thought I'd mention how "cushioned" it all felt :P


What language do you prefer over Swift? I'm curious.


C++11 with review-enforced RAII. Boost if you need special stuff involving odd data structures. gtest/gmock. Depending on what you're doing, add some Qt5, OpenCV, OpenMP, RapidXML/RapidJSON, any C library ever written - and there are BSD/LGPL libraries for pretty much anything. Make plugins with Lua. Run tests in Valgrind.


I'm not sure I would choose something over the latest version, primarily because of how small a time investment using it can be... but then again, it all comes down to what the task at hand is, doesn't it ;P


Yeah, but it can't replace C++. Swift was designed to make programming fun/nice, not to replace C or C++.


"Swift is intended as a replacement for C-based languages (C, C++, and Objective-C). "

Taken from https://swift.org/about/

"Swift is a successor to both the C and Objective-C languages."

Taken from https://developer.apple.com/swift/


Swift can't do realtime. It has automatic memory management. How do you replace C or C++ with Swift for that? What they mean is that Swift replaces Objective-C in user-facing apps (on Apple devices), but I doubt you will see the critical parts of the OS (kernel, drivers, etc.) written in Swift.

It's almost like saying javascript will replace C/C++. I want a nice language like Swift (or Go) to replace C but there is none. Rust was the best/latest failed attempt.


You can essentially write C in Swift by using UnsafePointer and such. If you avoid using reference types then you won't get any automatic memory management. I don't think it's quite ready to replace C as a kernel or driver language, but it's not that far off.


Is there any open source library that does that? I know in theory many things are possible but it would be great to actually test it/see it in action. I never thought of Swift as a language with zero cost abstractions and deterministic performance. If it works I think it really gives it a leg up.


It's pretty uncommon, so I'm not sure. If I may toot my own horn a bit, you may find the code to my memory dumper interesting:

https://github.com/mikeash/memorydumper2

I also gave a talk on this subject a couple of years ago. It's out of date by now, but the basics are still more or less intact:

https://vimeo.com/107707576

Or peruse the standard library documentation for UnsafePointer, UnsafeRawPointer, UnsafeMutablePointer, and UnsafeMutableRawPointer. These are built-in types that the compiler knows about, and code that uses them should compile down to essentially the same stuff as C pointer code would.


This is especially true in Swift 3, which has many improvements around this type of programming.


> Swift can't do realtime. It has automatic memory management. How do you replace C or C++ with Swift for that?

Of course it can; it is just a matter of OS vendors caring to push it down developers' throats.

There are ways to manually manage memory if required to do so, but then you suffer the consequences of dealing with Unsafe*.


Those are the claims. The reality does not match those claims.


Not yet.

That is how systems programming languages get adopted: when OS vendors say "my way or the highway".


Agreed. Certainly if the dev has any regard for things like performance and security.


It's not really predictable enough for a systems language.


Can you predict what your C compiler is going to do with your code, taking into account UB and compiler-specific implementation behaviors not specified by ANSI C?


More than with Swift. I routinely see 100x - 1000x and more performance difference between -O0 and -O in Swift. Considering that the optimiser doesn't give warnings or errors if it can't apply optimisations, that's out of bounds for me for a systems programming language. YMMV.

The whole UB idiocy is a different matter, though related because it's perpetrated by roughly the same group of people, for similar nonsensical and non-validated reasons. See my post http://blog.metaobject.com/2014/04/cc-osmartass.html

See also: http://www.complang.tuwien.ac.at/kps2015/proceedings/KPS_201... and Proebsting's Law.


> More than with Swift. I routinely see 100x - 1000x and more performance difference between -O0 and -O in Swift. Considering that the optimiser doesn't give warnings or errors if it can't apply optimisations, that's out of bounds for me for a systems programming language. YMMV.

Those are improvements required in the toolchain of a young language, which Apple, IBM and others will certainly make.


There are obvious exceptions; however, I have come to appreciate the greatness that is SO. All that is needed is a well-thought-out question with a little bit of work shown on the side.

I have a theory about why many complain about SO (please don't comment about this line, there are obviously exceptions):

There has been a ridiculous sense of entitlement with the growth and recent appeal of tech jobs in the past 5 years.

All this crap about "trolling" getting out of hand, not enough diversity (THE FIELD WAS PRIMARILY FILLED WITH NERDS OFTEN LACKING ANY SOCIAL SKILLS, no one else wanted to look or hang out with "that guy who is good with computers") etc.

It's a field that was mainly driven by the desire and enjoyment of messing around on computers. Therefore most of the good ones (among today's diluted masses of "experts") spent a great deal of time on these things. I'm not surprised that somebody would get pissed off if another came around and started asking for the answers without showing any real effort or drive.

SO will forever be a poor resource for the huge incoming population of coders.


> (A programmer does not primarily write code; rather, he primarily writes to another programmer about his problem solution.)

An obvious concept that can so easily be forgotten at times.


Not so obvious. I have a couple problems with this statement.

The first is that it confuses programmer (as in "coder") and developer (as in "designer"). But maybe that distinction didn't exist yet. To me, a programmer/coder is actually a translator - they translate the solution expressed in the specification of a program into a computer language (which is typically itself translated to machine code by a compiler). In reality, gaps in the specification or impedance mismatches between the specification language and the computer language can "force" a programmer to become a designer. In a way, one can still see this as a translator having to translate an idiomatic expression.

The second problem with this statement is the same as with "readability": it is very subjective. That's why another claim in this text, that incomprehensible code written by bad programmers dies early, has been proven false in practice. Readability and viability are in practice separate concerns. For instance, highly optimized code is often less comprehensible than naive code, yet it is more viable because the extra efficiency is required. Furthermore, human-to-human communication using a computer language is more or less efficient depending on the distance between the computer language and natural language; as an extreme example, it would be terrible to communicate using assembly language. So the H2H communication problem is better solved with documentation. That is, comments, if we speak strictly about computer code.

Now the question is: if comments solve the human-to-human communication problem, why would one have to write more comprehensible/readable code? To me the answer is: you don't write comprehensible code in order to better communicate with your fellow programmers; you write comprehensible code because you write the simplest possible code. It is comprehensible because it is simple, not because of the additional requirement that it has to be understood by someone else.


Are there really two roles there? Who wants to design but not code, or code but not design? I can't really imagine how you could do either well without doing the other.


It sort of reminds of mechanical engineering, where engineers sketch out the problem solution, do the calculations, and make sure the solution works. Then, they supervise drafters or technologists to produce the actual final product of engineering, drawings (2D/3D, paper/CAD, whatever).

Of course, that's in larger orgs. Smaller places will typically have an engineer doing everything.


Of course, there is overlap there. Technician->engineer used to be a fairly common career path.


You apparently have not encountered the "architecture astronaut" archetype[1]. Your final sentence stands; usually they can't.

[1] http://www.joelonsoftware.com/articles/fog0000000018.html


The issue with "architecture astronauts" is that most problems in software just aren't complex enough to warrant formal design. The core failing of these people is that they impose unnecessary design complexity in order to force the design to be complex enough to warrant formal design.

If the problem is inherently complex enough, particularly if it is multidisciplinary, then it is the correct approach.


High-assurance sub-fields have many examples where the code is the easiest part, as it's just implementing a verified spec. Others have plenty of work on both sides. Altran (formerly Praxis) is a nice example with many commercial successes:

http://sdg.csail.mit.edu/6.894/dnjPapers/hall-correctness-by...


So called software architects seem to tend towards design only.


It's a "turtles all the way down" issue, though.

Software architects do the macro design, but they need to do that from an understanding of how it will be implemented, and the various lower level tradeoffs like consideration of the memory constraints of the target platforms.

In order to implement that design the non-architect programmers need to micro-design their bits that they build, too. And those smaller bits also often need abstractions/mini-architectures that aren't part of the macro architecture.


See proper engineering and the distinction between an engineer, a technician, and an assembler. There is overlap, of course, but designing and producing the product is a collaboration between all three. The engineer is primarily responsible for designing the system, the technician for helping the engineer translate the design into a realized system, and the assembler for actually realizing the system.


Well, but it's a bit weird with software. The "realized system" typically comes out of a compiler. When designing the system, the computer can often take your design and turn it into a realized thing.

In fact, we even have software called an assembler :)

Of course, it all depends on what level you're doing design at, and what level your language(s) work at. But the point is that those three roles aren't necessarily all fulfilled by humans when working with software.


I didn't say they necessarily were all fulfilled by humans. My point is that they are not necessarily all fulfilled by one human. Apropos of the comment I made in response to the "architecture astronauts", the complexity of most software projects is such that the design, from a technical standpoint, is a pretty small portion of the total effort required. In that case it makes sense for the technician to be the designer as well, which is something that often happens in the trades on simple jobs.

That does not mean that there can't or shouldn't be a distinction between the engineer and the technician on sufficiently complex projects. That is, they don't need to be filled by people who could perform either role, as the original comment I was replying to implied. That point was either miscommunicated in my original post or rubbed some people the wrong way.


The thing is that engineers produce and maintain documents. Specifically they produce source documents from which the end product has been derived.

That's exactly what a coder does.

There are some fields (draftsman, typist, others?) of people who created fine documents based on vague documents from engineers, architects etc. So in principle you could have a similar distinction between software archtects and coders. But it is exactly those document-gruntwork fields which have been transformed most by computers.


Are organizations still trying to move in that direction when engineering software products? I know it's been tried for decades now. But my limited experiences in organizations that tried it have not been very good. I'm curious if any organizations have made it work.


At the risk of drawing accusations of using the "No True Scotsman" fallacy, I'd say that most attempts have been trying to force-fit that type of style on projects that don't warrant it. That happens in all areas of engineering, though. The specific problem in software, IMO, is that too many projects are considered software "engineering" that shouldn't be, because they just aren't complex enough.


> To me the answer is: you don't write comprehensible code in order to better communicate with your fellow programmers, you write comprehensible code because you write the simplest possible code.

This is covered in the essay.

> It would appear all good programmers attempt to do this, whether they recognize it or not.

By writing the simplest code possible you are working on communicating with other programmers (including your future self).


> By writing the simplest code possible you are working on communicating with other programmers (including your future self).

Yes, the computer really doesn't care if your code is simple.


"Now the question is: if comments solve the human-to-human communication problem, why would one have to write more comprehensible/readable code? To me the answer is: you don't write comprehensible code in order to better communicate with your fellow programmers"

I strongly disagree with that statement. Coders spend more time reading code than writing it. Being fast and bug-free is important, but it's not the only reason to write readable code, and simplicity is only one proxy for readability. Also important are following standards, easy-to-follow flow, choosing good variable names, etc.

Sometimes a day for me is spending six hours reading code and then maybe adding or changing one line.


> simple is only one proxy for readable code

No, simple isn't a proxy for readable code. One can write in a readable manner a complex solution to a problem, but it isn't always right (using other "proxies" isn't, either).

It isn't always right because it's "a complex solution to a problem", not "a complex solution to a complex problem". What you made readable might even be "an overly complex solution to an actually simple problem"; meaning - to put it bluntly - that you have inadvertently spent your precious time on putting lipstick on a pig.


I didn't say simple was the only proxy, just that it is one. If you're at odds with that, take it up with pretty much all the greatest minds in computer science.


This is a ridiculous product... reminds me of the classic upper management/CEO "ideas". You know the kind: obsolete, neglects societal concerns (security???), nobody around to tell them it's a bad idea.

> (Why make this product, with its attendant risks, and why now? “Because it’s fun,” he says with another laugh.)

Sometimes you can look at something and just KNOW that there is not a chance that pile of junk is gonna gain traction.


I remember doing just this when first trying to get a foot in the door... Offered to take on the task of building a website. I did just that and went on to maintain, secure, and add new features to it for over a year.

The one thing I wish had been made clear to me beforehand was just how little the non-tech and business-savvy people knew about what we do. As such, despite how much of a positive impact I had, my pay was not even close to what it should have been (first-year photographers were making more than me). It was a constant battle to explain why certain time was needed to complete various tasks, as well as why I was putting in the hours I had.

When I finally managed to get out of there, feeling underappreciated, it was THEN that my former boss realized how lucky he had been. Nobody would come and work for the same pay while being asked to do all that I had done.

So be careful and at the very least prepare yourself if trying to go into smaller companies and businesses.


"I’m living in an ontological nightmare of my own making. It’s jawsome!"

Had me in tears. Also, I had completely forgotten about that show until coming across this, so thanks for bringing back some history!


Thank you. Upon reaching "...I can perfectly adjust the pitch and yaw of the screen.." I was unable to stop laughing. Well done.

