Hacker News | windsignaling's comments

How legit is this? In my experience, every time I've updated iOS due to some urgent "security issue", the result was my phone just got a whole lot slower.


That's because it JITs code instead of pre-jitting like Android does, so when you hit a piece of code that hasn't been compiled yet, you perceive it as slowdown, since the first time it runs it is emulated.


This comment makes no sense. Both Swift and Objective C are precompiled languages.


Swift compiles to bytecode; Objective-C I'm not sure about, but I think that is native.


This is incorrect. Swift optionally compiles to _bitcode_ (+), which is pre-compiled by _Apple_ before distribution, not by your device itself.

(+) Bitcode is now deprecated: https://digital.ai/catalyst-blog/navigating-apples-bitcode-c...


Then it makes no sense why Apple warns you that your device is performing "background" activities post-update. Last time I looked at iOS binaries, they seemed to decompile as if they were bytecode, but it could also be related to more advanced debug directories. I haven't messed with the internals of these devices since breaking the bootloader became pretty much impossible.


SQLite migration scripts, indexing and vacuuming.


I didn't realize these archive links were still working. Ever since the FBI order in November, these links have just been hanging for me. https://downforeveryoneorjustme.com/ had been saying it was down so I just thought it actually was down for everyone, but I continue to see these links posted. Is the site still working for some people?


The blocks tend to be implemented at the DNS level, so you can usually connect by using a different DNS server. The Tor onion link works fine as well, and that URL is on the archive.today Wikipedia page.

https://en.wikipedia.org/wiki/Archive.today

> archiveiya74codqgiixo33q62qlrqtkgmcitqx5u2oeqnmn5bpcbiyd.onion

via this post on their blog:

https://blog.archive.today/post/711271973835227136/did-somet...
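A concrete sketch of the different-DNS-server workaround on Linux (1.1.1.1 is just an example public resolver, not a recommendation):

```
# /etc/resolv.conf -- point lookups at a public resolver instead of the
# ISP's default one (often the place where the block is implemented)
nameserver 1.1.1.1
```

Other systems expose the same setting through their network configuration UI.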


Thank you both, this explains a lot.


I'm surprised no one else has commented that a few of the conceptual comments in this article are a bit odd or just wrong.

> The final accuracy is 90% because 1 of the 10 observations is on the incorrect side of the decision boundary.

Who is using K-means for classification? If you have labels, then a supervised algorithm seems like a more appropriate choice.

> K-means clustering is a recursive algorithm

It is?

> If we know that the distributions are Gaussian, which is very frequently the case in machine learning

It is?

> we can employ a more powerful algorithm: Expectation Maximization (EM)

K-means is already an instance of the EM algorithm.


> Who is using K-means for classification? If you have labels, then a supervised algorithm seems like a more appropriate choice.

The generated data is labeled, but we can imagine those labels don't exist when running k-means. There are many applications for unsupervised clustering. I don't, however, think that there are many applications for running much of anything on an Apple ][+.

> K-means clustering is a recursive algorithm

My bad. It's iterative. I'll fix that. Thanks.

> If we know that the distributions are Gaussian, which is very frequently the case in machine learning

Gaussian distributions are very frequent and important in machine learning because of the Central Limit Theorem but, beyond that, you are correct. While many natural phenomena are approximately normal, the reason for the Gaussian's frequent use is often mathematical convenience. I'll correct my post.

> we can employ a more powerful algorithm: Expectation Maximization (EM)

Excellent point. I will fix that, too. "While k-means is simple, it does not take advantage of our knowledge of the Gaussian nature of the data. If we know that the distributions are at least approximately Gaussian, which is frequently the case, we can employ a more powerful application of the Expectation Maximization (EM) framework (k-means is a specific implementation of centroid-based clustering that uses an iterative approach similar to EM with 'hard' clustering) that takes advantage of this." Thank you for pointing out all of this!
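To make the "k-means is hard EM" point concrete, here's a minimal, self-contained sketch (plain Python, toy data and hand-picked starting centroids, all assumed for illustration): the E-step assigns each point wholly to its nearest centroid, and the M-step recomputes each centroid as the mean of its assigned points.

```python
# Minimal k-means sketch illustrating its EM-like structure:
#   E-step: "hard" assignment of each point to its nearest centroid
#   M-step: recompute each centroid as the mean of its assigned points

def kmeans(points, centroids, iters=20):
    for _ in range(iters):
        # E-step: each point belongs to exactly one cluster,
        # the one with the nearest centroid (squared distance).
        clusters = [[] for _ in centroids]
        for p in points:
            best = min(
                range(len(centroids)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])),
            )
            clusters[best].append(p)
        # M-step: move each centroid to the mean of its cluster
        # (keep the old centroid if its cluster went empty).
        centroids = [
            tuple(sum(c) / len(cluster) for c in zip(*cluster)) if cluster else mu
            for cluster, mu in zip(clusters, centroids)
        ]
    return centroids

# Two well-separated toy blobs.
pts = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1), (5.0, 5.0), (5.2, 4.9), (4.9, 5.1)]
mus = kmeans(pts, [(0.0, 1.0), (4.0, 4.0)])
# mus[0] ~ (0.0, 0.1), mus[1] ~ (5.03, 5.0)
```

A "soft" EM (a Gaussian mixture model) would replace the hard assignment with per-cluster responsibilities and weight the means accordingly.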


On the contrary, I'm confused about why you're confused.

This is a well-known and documented phenomenon - the paradox of choice.

I've been working in machine learning and AI for nearly 20 years and the number of options out there is overwhelming.

I've found many of the tools out there do some things I want but not others, so even finding the model or platform that does exactly what I want, or does it best, is a time-consuming process.


I don't pay money for it?


I will give you a free beer if I can listen to all of your personal conversations.


Not just lately. See /r/politics. Sometime in the last 5-7 years /r/science or /r/technology (or both, I forget because I stopped reading) basically became the science/tech versions of /r/politics.


This is interesting because it shows us how a programmer thinks of a problem vs. how a psychologist or neuroscientist would think of this problem and highlights the lack of "human-ness" in programmer thinking.

I'm no fan of schools forcing STEM students to study boring electives but this is a prime example of why that might be useful.

The entire premise of the post is wrong - average pixel value has nothing to do with how orange the oranges look - it's all about perception.

Here's an example where the same exact color (pixel value) can be perceived as either light or dark depending on the context: http://brainden.com/images/identical-colors-big.jpg

That's what the bag adds - context - but the author hasn't made this connection.
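To illustrate the naive-average point with made-up pixel values (these RGB numbers are assumptions, not taken from the article): the mean pixel over a patch containing both peel and mesh is a colour that appears nowhere in the scene, so it says little about what we actually perceive.

```python
# Toy illustration with assumed RGB values.
orange = (255, 140, 0)    # assumed "peel" colour
red_mesh = (200, 30, 30)  # assumed "mesh" colour

def mean_rgb(pixels):
    # Channel-wise arithmetic mean over a list of (R, G, B) tuples.
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

patch = [orange] * 7 + [red_mesh] * 3  # 70% peel, 30% mesh
avg = mean_rgb(patch)  # (238.5, 107.0, 9.0) -- a colour in neither object
```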


You saw someone making a bunch of observations, setting up an experiment, and trying to use maths/programming to prove a hypothesis, and you took that to be a sign of a "lack of human-ness"?

To me it showed curiosity and ingenuity. Sure, they might not have studied a certain subject, but it is a totally valid approach to an unknown problem. It might actually get people who have similar "silly questions" to run a similar set of experiments and perhaps stumble upon something novel.

Your comment showed less human-ness than the OP's, ironically.


I read the lack of human-ness as looking in the wrong place.

It’s not reality that changed because of the red net but our perception of it.

The solution isn’t in the oranges but in our brains.


Agreed.


While you are correct about color perception, I don't see the link to a 'lack of humanness in programmer thinking'. It's not an inherent trait to software engineers. The entire field of HCI, interaction design and everything around how we deal with digital colors are fully focused on the human experience.

Maybe a reminder that computer science != programming.


Context absolutely affects how we see things.

But so does its colour.

So observing how a red mesh affects that colour is absolutely worth investigating.


See Josef Albers’ Interaction of Color or the recent and more approachable Interacting With Color.


It's not a "programmer" problem. Any competent programmer I know would never think of averaging the color of the orange with the color of the non-orange bag and expect that to be orange, or representative of how we perceive the orange.


You clearly have some interesting and substantive points to make! but on HN, can you please do this without putting down others or their work?

It's all too easy to come across as supercilious and I'm afraid you crossed the line, no doubt inadvertently.

https://news.ycombinator.com/newsguidelines.html


See also this popular “optical illusion”: https://en.wikipedia.org/wiki/Checker_shadow_illusion


I haven't installed this yet, but does it require camera access? i.e. does it "transform" your own image to the target image while maintaining facial expression, pose, etc.? Based on the animations, I'd assume it doesn't use the camera since there are techniques that can lipsync from audio.


No camera access needed! It directly generates the image via audio. This is more than just lip sync, btw; it's animating the head of the image.


Great course. I highly recommend anyone interested in this topic check it out on the MIT website, taught by the same authors. They are great lecturers.


Looks like the lectures from a prior version are on YouTube too: https://www.youtube.com/playlist?list=PLUl4u3cNGP62EaLLH92E_...


Neural networks are not ML now?


EKF is a neural network!?


I think you missed the point of that comment. I was responding to the comment saying "Parameter estimation is ML now?"

Neural networks are trained commonly using maximum likelihood estimation, a common parameter estimation technique.
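To make that concrete (toy numbers, Python sketch): the cross-entropy loss used to train neural classifiers is just the negative log-likelihood of the data under the model, so minimising it is maximum likelihood estimation.

```python
import math

# For one observation with true class y and predicted class
# probabilities p, the negative log-likelihood is the familiar
# cross-entropy loss; minimising it over a dataset is MLE.
def nll(p, y):
    return -math.log(p[y])

probs = [0.7, 0.2, 0.1]  # toy predicted distribution over 3 classes
loss = nll(probs, 0)     # ~0.357 when the true class is 0
```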

