> But the "Sign in with Facebook" button didn't appear by magic on the Android app, did it? Someone had to add support to their app for it.
It would be nice if you could add an identity provider to your OS and the app could ask the OS to identify you using your provider of choice for that application.
From a high-level perspective, I'd say not too much. Browser support for newer HTML5/CSS3 features has improved, so it's safer to use things like flexbox now. But resources for learning the basics from back then should still be relevant, and have likely been updated anyway.
Regarding JS, there's an ongoing shift of hype among front-end frameworks like React, Angular and Vue. Others are going back to vanilla JS, since it's better standardized and more capable nowadays. But maybe that's not too relevant for you.
CSS Grid is now available and constitutes a major change in how layout can (and arguably should) be done. Of course, there should be fallbacks so older browsers still get a usable page (if not a pixel-perfect one).
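One common way to do that fallback (a sketch; the class names are illustrative) is to default to a flexbox layout and opt into Grid only where the browser supports it, via `@supports`:

```css
/* Fallback: a flexbox layout that older browsers understand. */
.gallery {
  display: flex;
  flex-wrap: wrap;
}
.gallery > .item {
  flex: 1 1 200px;
}

/* Enhancement: browsers that support Grid use it instead. */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
    grid-gap: 1rem;
  }
}
```

Browsers that don't understand `@supports` (which are roughly the ones without Grid) simply ignore the whole block and keep the flexbox layout.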
Writing JavaScript using newer ES2015+ (ES6 and later) features and transpiling it down for older browsers has become more common.
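As a small illustration (not tied to any particular toolchain), here are a few ES2015+ features that a transpiler such as Babel would rewrite into ES5 for older browsers:

```javascript
// ES2015+ features commonly run through a transpiler for older browsers.

// Arrow function, default parameter, template literal:
const greet = (name = "world") => `Hello, ${name}!`;

// Destructuring with a rest element:
const [first, ...rest] = [1, 2, 3];

// Class syntax (sugar over prototype-based inheritance):
class Counter {
  constructor() {
    this.n = 0;
  }
  increment() {
    return ++this.n;
  }
}

console.log(greet());                   // "Hello, world!"
console.log(first, rest);               // 1 [ 2, 3 ]
console.log(new Counter().increment()); // 1
```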
I haven't coded a page in years, so pardon my ignorance, but is Bootstrap really required?
I ask because I too am marginally interested in creating something. Though I suspect I'll just cheat and use a CMS like WordPress with a bog-standard theme.
As someone also looking to learn web design for a personal site, from what I can see Bootstrap and similar frameworks aren't really needed, provided you're willing to learn all the tech they utilise and do it from scratch yourself, either for learning purposes or because you prefer to avoid frameworks for some other reason.
The frameworks exist for professional web designers who don't want to redo the same stuff over and over, and who want code with maximum browser compatibility, edge-case coverage, bullet-proofing and so on - not goals you would probably prioritise when just starting out learning or creating hobbyist sites.
I'm considering using a CMS or a WYSIWYG editor too, but the prospect of creating a minimal, semantically rich, modern site in HTML5/CSS3 is too tempting not to spend the time learning the ropes. Besides, using these CMSes/editors efficiently takes a fair bit of effort itself, which we might as well put towards learning the foundations, or so I feel. :-)
As tempting as it is, the project I have in mind will be time-consuming, so in my case the time is probably best spent generating content.
Like you, I am sorely tempted to undertake some in-depth learning. Years ago, I wrote a comprehensive HTML tutorial. It covered the entire spec. Of course, that was v. 2.0, so it's hardly valid today.
The tutorial touched on CSS too, but I'm not sure CSS was even a complete standard back then. I think it got just a single page.
I would say I'm ten times more productive at 35 than I was at 26. Seek inspiration, self-knowledge and maturity away from the computer screen. Climbing a mountain as a metaphor for making persistent progress with a project works much better when you have struggled to climb a mountain or two in the fog.
British full-stack developer. Lots of experience building Rails and Django applications. Likes React, immutable servers and learning new languages. Dislikes Chef, Angular and Drupal. Looking for small but interesting projects.
It is the latest legal setback for Mr Moore, who was ordered in March to pay $250,000 (£170,000) in damages for defamation resulting from a civil lawsuit.
I may have misunderstood, but didn't this $250k fine originate largely due to the UGC on the website?
AFAIK users didn't actually upload content to the site. They submitted it and then Mr Moore or someone else posted it. I'm also pretty sure he wouldn't honour take down requests.
Is there a large chasm between "upload content" and "submit [content]"?
Is the difference in the latter case just that someone clicks "ok, post this"? I don't think that a quality review (or, at a lower bar, a mechanism to queue and release content to the site slowly over time) turns user-generated content into site-generated content.
If Youtube did a quality review, it would still be UGC.
If Youtube did a post-facto quality review (to remove copyrighted music), it would still be UGC.
...
Not honoring takedown requests is a different matter, and just sounds like a dumb call.
Just in general, the difference between having an editorial gatekeeper, and not, is that it gives the site operator actual knowledge of what's being posted. An example of how that would matter is the DMCA safe harbor, 17 U.S.C. 512(c)(1)(A), which requires the service provider not to be aware of facts or circumstances from which infringing activity is apparent.[1] But "how much did you know, when?" is a crucial question basically any time we hold someone responsible for something under the law, so "we knew exactly what was being posted to our site the whole time" is pretty different from "we couldn't possibly keep track of everything that was being posted to our site."
It has in the past, at least if the person doing the posting added their own editorial comments, as I believe Hunter Moore did. However, this issue hasn't come up so far because the main lawyer pursuing most of the civil cases, Marc Randazza, strongly believes this shouldn't affect Section 230 immunity.
The distinction is very important. The intent of safe-harbor provisions is not to allow sites to blithely host infringing content without possibility of reprisal as long as they take it down when asked. The intent is to relieve sites of the unreasonable burden of both being aware of and policing everything their users do - to allow them to act as a sort of "dumb pipe". If you have personal knowledge of infringing content, you are still expected to take it down even without a DMCA notice - and it is certainly not OK for you to post infringing content yourself.
(Standard IANAL caveats apply, of course. I think I have a reasonably good layman's understanding since my work touches on this area, but you should talk to your lawyer if you want concrete advice.)
I don't think his website would have existed if he had honoured takedown requests. The entire premise was that these pictures are hosted without permission (and according to the Jezebel article about 90% were). If he honoured DMCA requests then that would become widely known pretty quickly. Part of the point of the site was to publicly humiliate the women, so if they had any easy way to get rid of the pictures, it wouldn't have had the same impact.
'Is there a large chasm between "upload content" and "submit [content]"?'
There would have to be the most enormous chasm, because the latter includes every publication that uses the work of freelancers, which is pretty much every newspaper and magazine, and their online equivalents, everywhere.
For values of "anti bullying charity" equal to "company making money from online extortion", but yeah. (Their website Cheaterville has ads for services offering to remove "slanderous" and "defamatory" content from Cheaterville for the fee of $500 per entry. They handle all their advertising inhouse, so at the very least they're entirely aware of where their advertising income's coming from. In fact, at one point those were the only external ads on the entire network of sites and there was no information on how to advertise with them, so they'd obviously cut some kind of private deal with the companies offering these services.)
A great thing about the web is that a multitude of server side languages can be used. While I don't ever want to have to write the same template twice, having everything converge on nothing but Javascript doesn't seem like a very inspiring future to me.
I wholeheartedly agree, and lately I've been getting worried that too many people in the web community are on board with a Node-centric view of the browser...
It may be the same language, but the runtimes are completely different and have incredibly different use cases and environments. Not only that, but I'd rather have a healthy ecosystem where the Unix runtime, the .NET runtime, the JVM, and even interpreted runtimes built on top of those, like Ruby, Python and JavaScript, are all considered equally!
That is more in line with what the real definition of isomorphism actually describes. :)
If you are building a service or API, then you can do that in whatever language you like. The natural language for writing client side apps is JS (ok, the only language, for now).
Correct. If you're already writing a bunch of JavaScript for the client-side, then just think about this approach as migrating some of that client, UI logic to the server.
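As a minimal sketch of that idea (the function and names here are illustrative, not from any particular framework): a single rendering function can produce the initial HTML on the server and then re-render the same markup in the browser.

```javascript
// One rendering function shared by server and client. Names are
// illustrative; a real app would likely use a templating or component library.
function renderGreeting(user) {
  return `<p>Hello, ${user.name}!</p>`;
}

// Server side (Node): send the fully rendered page on the first request, e.g.
//   res.send(`<html><body>${renderGreeting({ name: "Ada" })}</body></html>`);

// Client side (browser): reuse the same function for later updates, e.g.
//   document.querySelector("#greeting").innerHTML = renderGreeting(user);

// Export for Node without breaking plain <script> usage in the browser.
if (typeof module !== "undefined") {
  module.exports = { renderGreeting };
}

console.log(renderGreeting({ name: "Ada" })); // "<p>Hello, Ada!</p>"
```

The point is that the UI logic lives in one place; which runtime executes it (server for the first paint, browser afterwards) becomes a deployment detail.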
No. Any thick client architecture has to deal with this problem. The current resource-oriented models (by which I mean they're focused on serving http resources) are severely limiting those of us who want to develop a web application.
Some people are side-stepping the issue by saying that the whole presentation layer must be moved to the front-end, but that approach is really incompatible with the web.
If anything, the current state of the front-end is thanks to server-side developers who want to bring their world view to the browser. (Think: MVC -> Backbone, Ruby -> CoffeeScript -- apologies to the authors of those tools.)