> It is far too early to be calling that a failure.
And what exactly do you think the end outcome will be when a) it is impossible to stop Omicron and b) the CCP isn't ramping up the vaccination effort for the 80+ year olds?
If you live in a tinderbox with frequent lightning strikes, spending billions on firefighting isn't going to change the end result. HK got to 1/3 the US's death rate in __weeks__.
That's what I thought when it came out initially, but after the series 4 release, I decided to get one, and I use it many many times a day:
* Apple Pay on the watch is the fastest way to pay on any NFC enabled POS; now works on Caltrain as well.
* Silent haptic alarm, which doesn't wake up your partner
* Passwordless macbook unlock
* Automatic phone unlock when using a facemask
* Overnight HR measurement, if you want to track how overtrained/rested you are
* I use it to open my front door and garage door via Siri when coming back from a bike ride or run (I don't need to carry my phone for this). I don't carry keys with me.
* Listen to music/podcast while on a run without my phone.
* Control media without taking out my phone.
* Vectorized maps and GPS tracking for running/cycling, with uploads to Strava (WorkOutDoors app)
* Strava app
* Interval timers when at the gym
* Easy continuous visibility of air-quality/UV/temp-range/hours-of-sunlight-remaining on my main watch face. Useful information when planning outdoor exercise.
* I occasionally do high altitude climbs, and hikes, so I'm looking forward to upgrading to a newer model with continuous elevation readouts and SpO2 readings.
* Use it to ping my lost phone
* Flashlight when you're in a bind
* Navigation while cycling
* Pick up calls when your hands are full and your phone isn't nearby.
That’s a bit tongue in cheek. Then again, I didn’t wear watches before; time alone didn’t seem a good enough reason. But that, combined with other benefits, works for me.
In the first months after starting to wear a watch (a smart Apple one or any other kind, really), I kept forgetting that it also tells the time. The habit of looking up the time on the phone was so deeply ingrained that, if I tried to find out the time while on autopilot, I'd always reach for the phone instead of looking at my wrist.
> I occasionally do high altitude climbs, and hikes, so I'm looking forward to upgrading to a newer model with continuous elevation readouts and SpO2 readings.
I also use it for this reason, but I've found that these activities can often last much longer than the battery on my S6 in airplane mode with an activity running (~6 hours).
For this reason I'm considering a watch with better battery life built for this purpose such as Garmin or Coros. The recently announced Coros Vertix 2 has a mind-blowing 140 hours battery life with full GPS tracking [1].
I don't foresee Apple reaching this level of battery performance anytime soon, specifically because of the other reasons you list why the Apple Watch is useful. I wonder if we'll see a trend of people owning multiple wearables for specific reasons, with the Apple Watch being a daily driver.
(Unrelated, but kind of sad that I got more information on how airtags actually work from a bike components review channel than dedicated tech reviewers)
Most tech reviewers publish the reviews right after the Apple embargo ends or just a couple of days after getting the product. For something like AirTags, where you want to try more complex usage scenarios, this is not optimal.
Keep in mind the idea of "other iPhones will help you find your tags" was not even something you could test when the first reviews came in, since iOS 14.5 was not out yet. Testing AirTags was therefore extremely complex for some edge cases, and the experience was not as serendipitous as it would be now.
It didn't help that Apple was not very open on how the anti-stalking features work exactly, for obvious reasons (you don't want people to figure out how to bypass them).
>It didn't help that Apple was not very open on how the anti-stalking features work exactly, for obvious reasons (you don't want people to figure out how to bypass them).
This is not a very good reason at all. People had already worked out basically every detail within a week, as well as how to remove the speaker.
All this did was flood the internet with questions from confused people wondering whether these features will leave them with a bunch of ringing tags if they go on holiday or take a bus.
> Keep in mind the idea of "other iPhones will help you find your tags" was not even a thing you could test for when the first reviews came in, since iOS 14.5 was not out yet
My understanding is that this is not actually true. AirTags use the existing "Find My" network of devices, and iOS 14.5 is not required for AirTags to be tracked in the wild. iOS 14.5 is required, however, to pair with an AirTag as the owner or to get the "this AirTag is stalking you" notification.
So it's nice that it notifies me there's an AirTag on me, but only when I get home. If someone's nefariously tracking me, by that point it's too late.
Well, the problem here is that there's no easy way to prevent false positives otherwise. Imagine if you leave your phone at home but take your keys with an AirTag to go shopping using public transportation. Everyone on the bus or the train will be getting notifications about an AirTag that is following them. Same if you lost an item with an AirTag in that bus or train even if you had your phone with you when you boarded it in the first place.
That's why the alert only pops up when you get to a known location (home/office).
As for Android, I wonder if an app could be made for phones equipped with UWB radios that alerts you if a tag is always in close proximity, just based on the tags radio activity.
You don't need UWB for safety alerts; plain Bluetooth is enough. These tags send out a message every 2 seconds, and part of the rotating key is predictable, so if you see the same fixed portion multiple times in a row, it's very likely the same tag (it's unlikely that several different tags, each with exactly the same fixed part, passed near you one at a time). But if you do something like set up tracking stations at malls, there will be too many duplicates.
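As a rough sketch of that detection logic (the payload format, field names, and threshold here are hypothetical, not Apple's actual BLE advertisement layout):

```python
from collections import Counter

# Hypothetical sketch: each BLE advertisement carries a "fixed" identifier
# prefix plus a rotating portion. We count how often each fixed prefix
# recurs among the advertisements a phone has observed.
def flag_suspicious_tags(advertisements, min_sightings=10):
    """advertisements: list of (timestamp, fixed_prefix) tuples seen by
    the phone. Returns prefixes seen at least min_sightings times --
    likely one tag travelling with you, not many tags passing by."""
    counts = Counter(prefix for _, prefix in advertisements)
    return {prefix for prefix, n in counts.items() if n >= min_sightings}

# One tag ("aa11") advertising every 2 seconds for a minute, plus a
# couple of one-off tags passing by, flags only the persistent one.
ads = [(t, "aa11") for t in range(0, 60, 2)]
ads += [(5, "bb22"), (30, "cc33")]
print(flag_suspicious_tags(ads))  # {'aa11'}
```

A real implementation would also window the sightings in time and account for whether the phone itself is moving, which is roughly where the mall-tracking-station duplicates become a problem.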
Or if you have an iphone and someone plants a samsung tag on you, you also get no notification. Someone could also just follow you home. Airtags at least prevent long term stalking which is difficult to pull off without them.
The cat is well out of the bag now. I think Apple, Google and Samsung all need to get together and standardize safety alerts.
Tech reviewers are too busy praising Apple and calling everything they make magical. There are very few real tech review sites that test devices based on objective criteria. They don't get as much traffic compared to websites like The Verge.
The Verge still has quality content, as long as you keep in mind that almost all of their reviewers live in the US, they are all in the Apple ecosystem, and they wouldn't be able to talk to their family and friends if they switched to Android anyway.
They have no reason to be enthusiastic about anything that's not an iPhone or an iPhone accessory.
> There are very few real tech review sites that test devices based on objective criteria. They don't get as much traffic compared to websites like The Verge.
The industry has a very carrot-stick mentality. Gamers Nexus for example gets shit on by everyone especially when they call this shit out while the "real gaming" channels just go along with anything to get review copies.
Well that’s about 50% of the US market that you’re describing and honestly the US market is the number 1 priority for companies, followed by China. Other countries are faaaar behind in third+ place.
There are two main reasons why I say nobody besides Google is really allowed to crawl the web.
The first is that Google gets much more access to pages on websites than everybody else. You can see this by examining the robots.txt files of various websites[0]. I've been doing this for several years now, and Google has a consistent advantage across the many thousands of websites I've looked at. This adds up to a significant advantage, and many search engine operators complain about how it hampers their ability to compete with Google[1].
The second is that Google gets to ignore the crawl-delay directive in robots.txt while other search engines don't[2]. Website operators cannot tell Google how fast they want their website crawled; they can only request that Google slow down. If another search engine tried to do what Google does, it would likely be blocked by many important websites.
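To illustrate the kind of asymmetry I mean, here's a hypothetical robots.txt (the paths and delay value are invented) parsed with Python's standard-library robotparser. Googlebot gets its own, more permissive group with no crawl delay, while everyone else is locked out of more paths and throttled:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt favouring Googlebot: broader access, no crawl delay.
robots_txt = """
User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow: /private/
Disallow: /search/
Crawl-delay: 30
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

print(rp.can_fetch("Googlebot", "/search/results"))     # True
print(rp.can_fetch("SomeOtherBot", "/search/results"))  # False
print(rp.crawl_delay("Googlebot"))      # None -- no delay for Google
print(rp.crawl_delay("SomeOtherBot"))   # 30
```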
So, uh, don't respect robots.txt in your search engine? It's not like there's a law that you have to, and that you can't pretend you are Googlebot. The only real obstacle I can imagine is that some firewalls might be configured to be more permissive with traffic originating from Google subnets.
You would be blocked fairly quickly by many website operators, and no longer able to access those websites, if you straight up ignored robots.txt files. You might even end up being served cease-and-desist letters by some websites, and sued if you persist and try to find ways around it.
Applebot was able to get away with exactly this, but I imagine that's because it's Apple, and websites knew Apple was about to send them enough traffic via Apple News to make it worth their while. I don't know if other search engine operators have tried this, but I would imagine they would get caught by rate limiters set for non-Google IPs and then be blocked.
Still, you keep saying all that as if most websites even notice that they're being crawled, and as if their operators know exactly when and by whom. It's not like the admin gets a notification, with precise details, every time a crawler comes by. I don't think it's nearly as serious as you're making it out to be.
I've been part of a team that operated a large website, and I've been paged because somebody was crawling us too aggressively. Many people in the web operations field have had the same experience. Generally speaking, the larger the website, the more sensitive its operators are about who is crawling it and why.
To add another data point for you: I have had one of my websites brought down by Yandex bots before. There are also dozens of no-name bots (often SEO tools like ahrefs, semrush, etc.) that can sometimes cause troubles.
For me it was a problem of having lots of pages, and having a high cost per request (due to the type of website it was).
For other websites, it is not necessarily about the volume of traffic from bots, but the risk of web scrapers getting their proprietary data. They're fine with Google scraping their info because that's where their traffic comes from. They're not okay with some random bot scraping them because it could be taking their content and republishing it, or scraping user profile data, or using it for some nefarious/competitive purpose.
> the risk of web scrapers getting their proprietary data
That's some weird logic, to me at least. That data is literally given away to everyone but some people or organizations can't have it? If you want to control access to it, maybe at least require people to register before they can see it? Is it even proprietary if it's public with no access control whatsoever?
This for-profit internet is just really such a parallel universe to me.
> This for-profit internet is just really such a parallel universe to me.
I know I have been a contrarian commenter in this thread, but I hear you on this. What a monster we have built, and what always gets me is how trivial everything is. So much capital is flowing through these ephemeral software systems that, if gone tomorrow, would be ultimately inconsequential to mankind.
I mean it's ridiculous to think about it, but there's this giant, many-billion-dollar online marketing industry that I essentially don't exist for. If it's gone tomorrow, I would indeed not notice, but it'd be the end of the world for some.
> and what always gets me is how trivial everything is
Whenever I read about corporations and how they work, I always inevitably ask myself the question "where the hell does enough work to keep this many people busy even come from". Everything is ridiculously overengineered to meet imaginary deadlines.
> That data is literally given away to everyone but some people or organizations can't have it?
It's often a question of quantity. LinkedIn probably doesn't care about you scraping a few profiles, but if you're harvesting every bit of their publicly-available data, then they get a little scared that you're building something that's going to compete with them.
Same with Instagram, or Facebook, for example. Though in this case it's probably more of a user-privacy issue - at least that's what they say.
It's not really weird logic to me - seems to make sense.
> If you want to control access to it, maybe at least require people to register
Most of the time they can't do this because they need the Google traffic. LinkedIn wants a result in the SERP for Bob Smith when you search for "Bob Smith" because that helps them get signups. Google won't list the page if that content is gated by a sign-in/register page.
There are syndicated blacklists that get fed into automatic traffic filters. Not to mention a surprising amount of the web is fronted by Cloudflare and other CDNs, making that kind of traffic detection and blocking more effective and widespread than you might expect.
It's a situation where the rules seem obvious but the practical realities of it mean Google has the advantage by being the incumbent. No one would dare block Google for a search traffic reliant business, but some upstart search engine will quickly end up on blacklists even with reasonably slow crawling.
These captchas are used to crowdsource training data for semantic segmentation ML models. By shifting the image around, users statistically fill out the boundaries of objects by selecting which squares include the object. As a result, in many captcha instances, you see objects right at rectangle boundaries.
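A toy illustration of how those shifted grid selections could aggregate into a pixel-level mask (the image size, grid size, shifts, and threshold are all made up for illustration, not how any real captcha backend works):

```python
import numpy as np

# Toy model: a 12x12 pixel field contains an object. Each captcha
# instance overlays a grid of 3x3-pixel squares at a random shift;
# "users" select every square touching the object. Accumulating
# those selections over many shifts approximates the object's
# pixel-level boundary.
rng = np.random.default_rng(0)
H = W = 12
CELL = 3
obj = np.zeros((H, W), bool)
obj[4:8, 5:10] = True  # ground-truth object

votes = np.zeros((H, W))
N_SHIFTS = 200
for _ in range(N_SHIFTS):
    dy, dx = rng.integers(0, CELL, size=2)  # random grid offset
    for y0 in range(-CELL + dy, H, CELL):
        for x0 in range(-CELL + dx, W, CELL):
            cell = np.s_[max(y0, 0):y0 + CELL, max(x0, 0):x0 + CELL]
            if obj[cell].any():   # a user would tick this square
                votes[cell] += 1

# Object pixels get a vote on every shift; pixels near the boundary
# only get votes when the grid lines fail to separate them from the
# object, so a high threshold recovers a tight mask.
mask = votes >= votes.max() * 0.9
```

Squares strictly inside or outside the object are unambiguous; it's the boundary squares, landing differently on each shift, that statistically trace the object's outline.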
I got the Modi 2/Magni 2 stack for my HD650s when they originally came out. Total disappointment: the pots had static when adjusting the volume, and there were channel-balance issues. The headphone amp on my MacBook didn't suffer from any of that, was indistinguishable in sound quality in a blind test, and was way less hassle. The only difference was that it didn't get as loud, but since I don't listen past the MacBook's max, I put them in the closet and never looked back.
That's unfortunate. I use planar headphones, so a dedicated amp is pretty much a requirement given the power draw vs. regular dynamic drivers. I opted for a Schiit Bifrost/Asgard stack (they didn't have the Magni/Modi combo at the time) and have had great experiences with them. I performed one of the Bifrost upgrades myself, and afterwards had some issues with the DAC resetting and audio cutting out; I emailed Schiit, told them what I had done to troubleshoot, and they shipped me a replacement upgrade part the next day.