
An index of environmental anthropological works concerning the conditions that we denote as "Anthropocene" - click the key at the top left to see the complete index


Thank you so much for sharing this. Simply stunning work!


I don't understand the outrage - perhaps this is just poorly worded, and they mean you can't sign in after resetting or signing out?


Can the page title contain markup? On my browser, I see `<i>g</i>` in the title on the tab.


I'm not going to knock Tesla for trying a primarily visual-only approach, but I will knock them for insisting that it will work out to level 4 autonomous driving with just software.

I predict that these kinds of unexpected edge cases will keep popping up for several years. It certainly will be interesting to see how robust their image processing algorithms get as time goes on, but I wouldn't hold my breath for reliable full self driving for a while.


I would hold it against them. I think we should start with as many sensors as is necessary to get it working robustly. Over time we can then strip one or the other sensor as we see what we can do with software improvements.

First get something working, then reduce costs, not the other way around.


With hardware comes software. Having fewer sensor types to target can simplify the software and lead to superior results. Edge cases will always be there: a vision-only approach might introduce some in one area while substantially reducing them elsewhere. With radar, they had phantom braking events, and according to Tesla that's the primary reason they want to ditch it.

I'm on the fence as to whether it's a good or bad decision ... but let's not pretend they're idiots.


Wouldn't two cameras and parallax show that the moon is a bit too far away to worry about?


It would if the cameras were good enough and plentiful enough and can be rectified. It seems pretty clear from all of the failure stories that Tesla either does not do stereopsis up front to establish geometry at all or doesn't do it enough and instead relies primarily on blackbox identification NNs.


Beyond distances many times the baseline between the cameras, the depth accuracy resolvable from parallax very quickly drops into the noise floor.

E.g. just a quarter mile is already in the noise.
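A rough sketch of why, using the standard stereo relation Z = f·B/d, where a disparity matching error Δd translates to a depth error of roughly Z²·Δd/(f·B). All camera numbers below are illustrative assumptions, not any particular car's:

```rust
// Depth from stereo: Z = f * B / d (f: focal length in pixels, B: baseline
// in meters, d: disparity in pixels). Differentiating gives the depth error
// for a given disparity matching error: dZ ≈ Z^2 * Δd / (f * B), i.e. the
// uncertainty grows with the *square* of the distance.
fn depth_error_m(z_m: f64, focal_px: f64, baseline_m: f64, disparity_err_px: f64) -> f64 {
    z_m * z_m * disparity_err_px / (focal_px * baseline_m)
}

fn main() {
    // Illustrative assumptions: 1000 px focal length, 0.2 m baseline,
    // quarter-pixel disparity matching error.
    let (focal_px, baseline_m, disp_err) = (1000.0, 0.2, 0.25);
    for &z in &[10.0, 50.0, 400.0] {
        // 400 m is roughly a quarter mile.
        println!("at {:>5.0} m: ±{:>6.1} m", z, depth_error_m(z, focal_px, baseline_m, disp_err));
    }
}
```

With these numbers the error is about ±0.1 m at 10 m but ±200 m at a quarter mile; the disparity itself is only about half a pixel there, so the measurement really is pure noise.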


> just a quarter mile is already in the noise

There's no reason to care about precisely measuring things a quarter of a mile away. Human drivers don't.


You can do structure from motion on a single camera without the need for stereo.


I agree. HN reminds me of the early days of reddit (pre-2010) where STEM people were over-represented compared to other popular websites at the time. The comments here often tend to be informative and insightful, and after catching up on the latest posts, there isn't much of an incentive to stick around. It's just how bulletin boards from the early days used to be, and reminds me of some small forums.

Reddit, on the other hand, has been a difficult addiction for me to break. It's gradually been growing into an alternative social media platform, and every new feature addition to reddit indicates such a transition.

It's still funny that redditors seem to be self-aware of their reddit addiction but somehow perceive Facebook et al to be worse. Different strokes for different folks, but at the end of the day, the mechanisms of addiction are similar regardless of the platform.


The FUD around Rusting the kernel reminds me of similar sentiments surrounding autonomous vehicles. Better != perfect; it's an evolutionary step. Even if there are unintended negative consequences of Rust code, if they are less frequent than the rate we deal with bugs today, then it's worth using.


Rust in Linux has a known, constantly visible negative consequence: it introduces another language into the stack. Furthermore, Rust is quite different from C. This makes the whole massively more complicated and increases the amount of knowledge needed to understand it.

The same effect applies every time a new language is introduced, if it doesn't completely replace the previously used language. In this case, Rust won't.

Zig might be a better fit, given how much more similar to C it is.


> Rust in Linux has a known, constantly visible negative consequence

It'd make your argument a lot stronger if you could actually list some of these negative consequences. "Another language in the stack" and "different from C" is just complaining about change because it is change.

What negatives have already been shown that aren't just "this isn't C"?


Mixing any two languages in any single code base creates significant friction at the boundaries, and adds new degrees of complexity in major areas (build system, tooling, debugging...). If we're talking about a project as complex as a production OS kernel, this kind of a decision should never be taken lightly. It's a much smaller step from 2 to 10 than from 1 to 2.


> It's a much smaller step from 2 to 10 than from 1 to 2.

But here, you're already starting with 2: C and assembly. Besides inline assembly, a small but very important part of the Linux kernel is written in assembly on every architecture: the system call entry point (entry.S) and the kernel entry point (head.S). And if you consider each architecture's assembly as a separate language, it's more like 10 languages than 2. I'm always impressed whenever I see changes to, for instance, signal handling or thread flags that touch not only the common code in C, but also the entry point assembly for each of the many architectures Linux supports; whoever does these changes needs to not only know the assembly language for all these architectures, but also have at hand all the corresponding tooling and emulators to compile and test the changes.


You do have a point, however (as you noted) the lowest-level bits of an OS kernel are practically impossible to build (and subsequently, maintain) without precise control over the machine code; you can't even start a hobby OS kernel project without relying on assembly. It's a part of the deal; a pure-assembly kernel is more feasible than one without any. You also (as you pointed out) still have to be mindful about the C-asm boundary; the integration doesn't come free.

The story here is pretty different: integrate a new, high-level language into a 30 year old, 30mil SLOC, production code base, that billions of people rely on every day, AND actually extract some value from that work.


A very obvious one is that by adding another language, you are adding more complexity.

It's not as if C is going to disappear from the kernel; it's something like 25 million lines of C code. And if Rust is to be supported, the current C experts who maintain various subsystems will now also have to become Rust experts so that they can effectively accept or reject code contributions in that language.

Personally it just seems illogical; better to write a new kernel in Rust if you really want to use that language than to convert small parts of a HUGE C kernel. Google has been pushing for the inclusion of Rust in the kernel; it's weird that they are not writing their own shiny new Fuchsia OS kernel in Rust instead of C++.


Another way is to sponsor and help to develop Redox OS[1] instead. It has a kernel completely written in idiomatic Rust[2].

[1] https://redox-os.org/

[2] https://gitlab.redox-os.org/redox-os/kernel


It's funny how frequently people bring this up, but the truth is simple; check here [1]. The Zircon kernel is not new [2]; it has been in development for a while now. By the time they started to work on the microkernel, Rust 1.0 was really new, so they would have had to implement several things from the ground up. There's an implementation of Zircon in Rust called zCore [3], but I don't know how stable and feature-complete it is.

[1] https://twitter.com/cpugoogle/status/1397265884251525122

[2] It's roughly a two-year project.

[3] https://github.com/rcore-os/zCore


Why is it a bad thing that Rust requires you to learn more things? As 'mjg59 pointed out recently, the kernel dev community intentionally asks you to learn more things unrelated to your code as a means of keeping the "bar" high and fielding only committed contributors. Isn't it all the more reasonable to ask people to learn a programming language? https://twitter.com/mjg59/status/1413406419856945153

Rust isn't terribly hard to learn, especially for a kernel developer with a good understanding of C and of memory. You can pick up the basics in probably an hour. A lot of its design choices match approaches the kernel already takes (traits are like ops structs, errors are reported via return values, etc.)

And Rust is a language that plenty of college students pick up for fun. Professional kernel engineers should be able to learn it just fine. Frankly the hardest thing about Rust is that it makes you think deeply about memory allocation, concurrent access across threads, resource lifetimes, etc. - but these are all things you have to think deeply about anyway to write correct kernel code. If you have a good model for these things in C, you can write the corresponding Rust quickly.
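The ops-struct parallel can be sketched with plain std types. In C you'd fill a struct of function pointers (e.g. file_operations) and signal errors with negative return codes; in Rust the same shape is a trait whose methods return Result. Names here are illustrative, not the actual Rust-for-Linux bindings:

```rust
#[derive(Debug, PartialEq)]
struct Errno(i32); // stand-in for a kernel error code

trait FileOps {
    // C equivalent: ssize_t (*read)(struct file *, char *, size_t);
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, Errno>;
}

struct NullDev; // behaves like /dev/null

impl FileOps for NullDev {
    fn read(&mut self, _buf: &mut [u8]) -> Result<usize, Errno> {
        Ok(0) // immediate EOF
    }
}

fn main() {
    let mut dev = NullDev;
    let mut buf = [0u8; 16];
    // Unlike a C return code, the Result can't be silently ignored:
    // the caller has to handle both arms.
    match dev.read(&mut buf) {
        Ok(n) => println!("read {} bytes", n),
        Err(e) => println!("error {:?}", e),
    }
}
```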

In fact, learning Rust and thinking about Rust's concurrency rules has made RCU a lot easier for me to understand. RCU is famously a difficult concept, but the kernel uses it extensively and expects people to use it. So "requires little knowledge and is easy to understand" is not an existing design goal of the kernel - but having people pick up Rust might help there anyway.

(Zig seems like an entirely reasonable choice too. Send in some patches! :) )


> Rust isn't terribly hard to learn, especially for a kernel developer with a good understanding of C and of memory.

I'm not so sure. I program in C for a living (embedded, for almost 20 years) and believe me that I tried learning Rust, but when I see something like:

    _dev: Pin<Box<Registration<Ref<Semaphore>>>>
I cannot even imagine the knowledge code like that might require, its implications, the result, the reason why it was written that way. It's confusing. It seems like something trying to work around a language limitation. Not nice at all.

Source: https://github.com/Rust-for-Linux/linux/blob/rust/samples/ru...
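For what it's worth, the type reads more gently inside-out. Here is a std-only analogy, with Arc standing in for the kernel's refcounted Ref and Registration reduced to a bare wrapper; this is a sketch of the layering, not the real bindings:

```rust
use std::pin::Pin;
use std::sync::Arc;

struct Semaphore { count: u64 }   // the driver's own state
struct Registration<T>(T);        // stand-in: ties a device registration's lifetime to T

fn main() {
    // Reading the layers from the inside out:
    //   Arc<Semaphore> (kernel: Ref)  refcounted, shared with concurrent users
    //   Registration<_>               registered with the kernel for the value's lifetime
    //   Box<_>                        one heap allocation with a stable address
    //   Pin<_>                        a promise the value will never be moved,
    //                                 because C code may hold raw pointers into it
    let dev: Pin<Box<Registration<Arc<Semaphore>>>> =
        Box::pin(Registration(Arc::new(Semaphore { count: 0 })));
    println!("count = {}", dev.0.count);
}
```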


Adding Rust complicates things, but Rust makes writing correct code easier, which is no small feat in the kernel world. The added complexity may be big, but it's a one-time cost compared to the stream of Rust code that one can hope for.

Rust is known to be hard to learn (YMMV), but C is even harder. If things go according to plan, someday, for some use cases, you'll be able to contribute kernel code in pure safe Rust without having to learn C. In the meantime, adding Rust doesn't seem to be such a big ask when you consider what the kernel already has besides C: assembler, the C preprocessor (yes, it's actually a different language, independent from C, and some kernel macros are really complicated), the BPF and io_uring APIs (essentially their own DSLs), and a myriad of other inner-platform curiosities you might need to deal with depending on the kind of kernel work you do.

Concerning Zig, the cons may be smaller than Rust's, but so are the pros. IMHO it's not worth it in the current context (I like Zig but it seems "too little, too late" to me). But there's no telling until somebody puts in the work for a "$OTHER_LANGUAGE in the kernel" RFC like the one currently happening for Rust.


Zig's syntax has nothing to do with C, and it overdoes it with @ everywhere.

Until it fixes use-after-free, better keep using C anyway.


What Zig shares with C is orthogonality, with a large power-to-weight ratio, meaning it's a small language grammar with powerful range.

But Zig also improves on C's safety in many ways, not least checked arithmetic enabled by default in safe release modes, along with bounds checked memory accesses, function return values that cannot be ignored, checked syscall error handling, explicit allocations, comptime over macros, a short learning curve and remarkable readability.

It's hard for systems programmers not to appreciate any of these qualities in isolation.


A brilliant language wouldn't suffer from use-after-free in 2021, or use file import as its module concept.


Syntax is almost completely irrelevant.


If syntax was irrelevant, ALGOL like systems programming languages would still dominate.


Not Lisp?


Lisp's failure in the mainstream market is more related to mismanagement, cheap UNIX workstations, and the AI Winter than to syntax.

But hey, its spirit lives on in most managed languages, Julia, Clojure, the WebAssembly text format, and plenty of other Lisp-based stuff.


We, the global community of software developers, are in the process of putting C out to pasture, with Rust as the de facto front runner as a successor. At this point it becomes a question of either admitting Rust into the kernel or, eventually, using another kernel written in Rust.


The same thing was loudly proclaimed about both C++ and Java, yet C is still here.


C++ was hampered by the same safety problems C was. And Java had a VM and GC, which cripple performance and determinism. Rust solves both those issues.


We need to keep in mind that there are two camps in the autonomous-vehicle discussion. We have a company that uses a ton of hardware (radar, lidar, and many cameras), and then we have the other one that wants to move fast and break things using only cameras, a GPU, and many beta testers. It is normal that the approach of brute-forcing it with a ton of data gets a ton of criticism.

My other criticism is the bad statistics used. It is as if I created a "robot athlete" and compared its stats with the average of all athletes, including young children and people with physical problems. You should compare self-driving with cars that have the exact same safety features and the same driver demographics. Bonus if you calculate all deaths caused by illegal driving and then ask the giants why not put the money into solving speeding and drunk/tired driving first; I bet ANNs would work better on this problem.

Rust in Linux seems to me a waste. IMO the Unix philosophy is great, but it needs a better implementation, one based on present-day hardware and expectations.


> Bonus if you calculate all deaths caused by illegal driving and then ask the giants why not put the money into solving speeding and drunk/tired driving first; I bet ANNs would work better on this problem.

Why would they want to do that though? No one would buy a car with those features. I guess you could get a few people to buy one if the insurance was way lower, but you certainly wouldn't get decent market penetration.


So say someone builds a system that you install in cars that monitors how you drive; it can detect whether you respect the speed limits and whether you drive normally or have risky behavior. Few will want this in their own car, but they will want it in other people's cars, so I can see these possibilities:

- a law that requires it in all cars

- a law that requires it in new cars

- a law that requires it only for new drivers (less than 2 years) and for people caught speeding/drunk.

- a law that will make certain roads only available for self driving cars or for people with this safety system. It would be a compromise between AI drivers camp and people that still want to drive themselves.

Maybe you will say something about privacy, but this system can be implemented with no connectivity to spy on you. It could be read only when you pay your insurance or in case of an accident.

But AI can also be deployed in a different way: you could have quality cameras recording traffic and have the AI detect who is using their phone, who is not paying attention, and who is moving in an erratic pattern, so it would be an advanced speed camera.

The argument is that the AI-driver camp will demand that humans be removed from the road, because their AI driver is better than the average driver. To avoid losing the right to drive, this average needs to improve, and that can be done by removing the bad drivers; a good place to start is with the ones who don't respect the rules. You might object to installing a safety system in your car, but then the big companies will lobby hard and you will have to use Tesla, Google, or Apple AI-powered cars, and then not only will the government know all about your movements, the ad companies will know too.


Then maybe the Rust community would be better served by not hyper-evangelizing it so much and at the same time bashing other languages?

There's a perception issue here: people think that Rust people (not necessarily the maintainers or official evangelists, but the community at-large) think Rust is as close to perfect as you can get because of its safety features, because these people talk about Rust like it's a universal problem solver.


I kind of agree, that is why you tend to see for-and-against comments from my side.

Despite the plus sides, there are lots of incumbents, certain domains are better served by managed languages, and regardless of our wishes C and C++ have 50 years of history.

Even if we stop writing new code in those languages today, Cobol shows us how long we would still need to keep them running anyway.

Microsoft, for example, despite their newly found love with Rust, is full of job ads for C++ developers in green field applications.


This points to the larger perception issue that "anybody who advocates for Rust is part of the Rust community and/or knows Rust well". But there are many Rust evangelists who obviously don't know much about Rust (this is not Rust-specific, it's a common issue in tech). This kind of "positive FUD" is ultimately harmful, as outsiders understandably get tired of the hype and start ignoring any pro-Rust argument, good or bad.

In my experience, the community of actual Rust users is much more level-headed. While most do love the language and the "this aspect of Rust is irrefutably better than the equivalent in $OTHERLANG" opinion occasionally pops up, the community seems pragmatic and well aware of Rust's cons. Case in point: the "should I use Rust" questions on the rust subreddit don't get dogmatic answers, and often result in "Rust isn't ideal for your use-case" advice.


It seems pretty clear that rust in linux would reduce certain kinds of run-time bugs. What isn't so clear is whether, overall, rust improves linux or not.

There's bad to be weighed against the good. Adding complexity strains and breaks processes, slowing development. Among other things, this means additional bugs and lets them survive longer, so it's not even clear rust is a win purely from a bug perspective.


> The FUD around Rusting the kernel reminds me of similar sentiments surrounding autonomous vehicles

That, but in the opposite direction.

Decades of promises of self-driving cars. And still nothing able to drive without a driver.

There have been small improvements...cars that have autonomous abilities in some cases.

But overhauling the entire driving fleet of the world to use 5-year-old technology....it's not a realistic expectation.

There are smaller, more practical expectations.


> Decades of promises of self-driving cars. And still nothing able to drive without a driver.

Waymo have driven 20 million miles autonomously since 2009, and as of late 2020 claim 74,000 of those were done completely driverless. They're still a far cry from being common, but they're here and they're impressively safe.


But what is the “miles on straight-ahead, well-lit US highway” vs. “kms on random European regional road” ratio?


> They're still a far cry from being common

Sounds familiar


The effect largely goes away if the frequency exceeds 4 kHz, as even the quickest saccades would see a blur rather than a flicker. Putting the flicker above 20 kHz would be ideal, because that way it is well beyond visual range, and even any noise generated by the circuit would be inaudible. The real question is why higher frequencies aren't used for driving LEDs. They have very low cycling latency, so it's a no-brainer.


In the mini-led industry the clocks are often much higher. That said, you are limited by scan (how many LEDs driven per driver chip), because the clock is divided among the driven LEDs.

Consider that raising the PWM frequency doesn't automatically solve the problems you raise. 20 kHz PWM can still have audible harmonics, for example. In the mini-LED industry there are actually spread-spectrum/randomized PWM approaches to address this, and to help with EMI.
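A toy model of the randomized-period idea: jitter each cycle's period around the nominal 20 kHz so the drive energy smears across a band instead of concentrating in one tone and its harmonics. The ±10% spread and the tiny xorshift PRNG are illustrative; real driver ICs do this in hardware:

```rust
// Minimal xorshift32 PRNG, enough for dithering purposes.
struct XorShift(u32);

impl XorShift {
    fn next(&mut self) -> u32 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        self.0 = x;
        x
    }
}

// One cycle's period in nanoseconds: nominal plus uniform-ish jitter
// in [-spread, +spread].
fn dithered_period(rng: &mut XorShift, nominal_ns: i64, spread_ns: i64) -> i64 {
    let jitter = (rng.next() as i64) % (2 * spread_ns + 1) - spread_ns;
    nominal_ns + jitter
}

fn main() {
    let (nominal, spread) = (50_000i64, 5_000i64); // 20 kHz ± 10 %
    let mut rng = XorShift(0x1234_5678);
    let n = 10_000;
    let sum: i64 = (0..n).map(|_| dithered_period(&mut rng, nominal, spread)).sum();
    // The on-time is scaled with each period, so mean brightness is unchanged.
    println!("mean period: {} ns (nominal {})", sum / n, nominal);
}
```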


> 20 kHz PWM can still have audible harmonics... there are actually spread-spectrum/randomized PWM approaches to address this, and to help with EMI.

!! Great.. now I have to consider Tempest shielding for all the displays; I had thought once CRTs went away, we'd be safe...


> They have very low cycling latency, so it's a no-brainer.

I read somewhere that that makes it difficult to accurately control the brightness. With HDR content already requiring 10 bits, anything that interferes with that is a problem. Future panels might require 12 bits or more.
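One way to see the conflict: with a fixed driver clock, the gray-level resolution available per PWM cycle is log2(clock / f_pwm) bits, so raising the PWM frequency directly eats brightness bits. The 20 MHz clock below is an illustrative assumption, not a specific panel's spec:

```rust
// Bits of brightness resolution per PWM cycle for a given driver clock.
fn pwm_bits(clock_hz: f64, pwm_hz: f64) -> f64 {
    (clock_hz / pwm_hz).log2()
}

fn main() {
    let clock_hz = 20e6; // assumed driver clock
    for &f in &[240.0, 2_000.0, 20_000.0] {
        println!("{:>6.0} Hz PWM: {:.1} bits", f, pwm_bits(clock_hz, f));
    }
}
```

At 20 kHz the budget is just under 10 bits, which is exactly where 10-bit HDR starts to hurt; a 12-bit panel would need roughly a 4x faster clock or a lower PWM rate.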


Arguably, it is medical. Substance use disorders, of which cannabis can be an example, are medical disorders which are better treated medically than with punitive justice.


Everything from chronic theft to murder, and many things in between, could be viewed as a medical disorder in many cases. That isn't particularly relevant to whether or not it should involve jail sentences. Cannabis is no more of an addictive substance than what's involved in many other crimes.

Pointing out that cannabis isn't harmful is what I'd expect a surgeon general to say (and I think they should say it), but he didn't. Instead, he made a "value" argument, which is strictly non-medical.


It's a very interesting piece of hardware. Sadly its convoluted design is the result of development constraints to extend the functionality of the Mega Drive. Apart from emulator development, there isn't much use for the hardware today - it's not even well-represented in the demoscene.


I don't understand why anti-cheat requires invasive software. Encryption can be used to communicate with the server, and the server can then authenticate the client's state. The application itself can use tokens to prevent the user from prying in its address space via a rigged kernel.


Not sure why you are being downvoted for not understanding and asking a question to fix that, as the matter is relevant to the post you replied to...

Essentially anti-cheat code needs that level of access to detect/circumvent cheat enabling code that has that level of access - it is a protracted arms race. There is money and kudos to be made through gaming, so people will cheat by any means necessary.

You can't remotely prove the entire state of the client unless you entirely control the client, and no current OS can offer the level of sandboxing required to offer that assurance. If you can't 100% trust the state of the client then no transport level encryption and such will fix that - you are just guaranteeing the faked data is transported safely at that point.

Of course that level of control being required for single player games is much more dubious, so there is a grain of truth in the more tin-foil-hat sounding theories about identity tracking & such on the part of the publishers.


Anti cheat is the excuse used to collect saleable or exploitable private data, and as a mechanism to perpetuate walled gardens. Centralizing accounts and identities to enforce ban lists, associate payment methods, target advertising, enable microtransactions, and so on are the reason for the security theater.

Rent seekers will extract as much cash and time from players as can be gotten away with.

Server level ban lists and competent game referees and volunteers could be a powerful answer to the problem, but there's not a lot of incentive to innovate away from rent seeking, as the big studios and stores crush any threats to their success.


Server level ban lists and competent game referees could perhaps solve this for high level play, but there’s no way you could scale that out so that the average player of a first person shooter doesn’t have to deal with cheating. That said, I’m not really a fan of things like Vanguard if for no other reason than it’s not clear that they have helped much beyond making cheats somewhat more expensive but not enough to be much rarer.

It’s also worth noting that multiplayer games without anti-cheat have had centralized accounts and microtransactions for a long time, so I’m not sure I understand how the anti-cheat measures are furthering those.


The ideal solution is to make "cheating" impossible by design rather than by trying to ensure trust.

Don't send data the player is not supposed to know. Don't trust clients to just tell the server what happened. And - I realize this is extremely controversial - ideally don't design games on pure reaction speed, visual acuity and mechanical dexterity where a sophisticated enough machine would consistently and unpreventably beat any human.

I believe we should be able to compete with bots just like we're able to compete with humans - and not because bots are handicapped and constantly toss coins deciding if they want to let the puny meatbag win.

I'm curious if there are any special tournaments where "cheats" are encouraged and even required, not prohibited. Would love to see a FPS where you have all the software aid you can think of. Texture hacks become enhanced vision aids (server may toss a coin and enforce camouflaging by not sending any information, though), auto-aim is smart munitions (so we don't compete on whoever has the faster hands or a better mouse -- see, it's already a competition of machinery!), last-seen markers and sound source visualization are tactical HUDs, and if you want some other feature you're free - as your competitors - to implement it. Naturally, if that's based on an existing game that would require heavy re-balancing of its rules (e.g. nerf of one-shot-kill weapons or buff for supports so in a teamplay they can save their teammates from such weapons). That would be a whole next level e-sports, true to the name.
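The first point, not sending data the player shouldn't have, can be sketched as server-side interest management. This is a toy distance check; real engines use precomputed visibility sets and occlusion, but the principle is the same: information a client never receives cannot be extracted by a wallhack:

```rust
#[derive(Clone, Copy)]
struct Pos { x: f64, y: f64 }

// Can this viewer see the target? Here, a bare distance test.
fn visible(viewer: Pos, target: Pos, view_dist: f64) -> bool {
    let (dx, dy) = (target.x - viewer.x, target.y - viewer.y);
    (dx * dx + dy * dy).sqrt() <= view_dist
}

// The per-tick snapshot the server replicates to one client:
// only the entities that client is entitled to know about.
fn snapshot_for(viewer: Pos, others: &[Pos], view_dist: f64) -> Vec<Pos> {
    others.iter().copied().filter(|&p| visible(viewer, p, view_dist)).collect()
}

fn main() {
    let me = Pos { x: 0.0, y: 0.0 };
    let others = [Pos { x: 10.0, y: 0.0 }, Pos { x: 500.0, y: 0.0 }];
    let snap = snapshot_for(me, &others, 100.0);
    println!("replicating {} of {} entities", snap.len(), others.len()); // 1 of 2
}
```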


> And - I realize this is extremely controversial - ideally don't design games on pure reaction speed, visual acuity and mechanical dexterity where a sophisticated enough machine would consistently and unpreventably beat any human.

What games, other than turn based strategy games or puzzle games, don't have the property you list here? And even then, you can absolutely cheat at chess. Like it or not, people want to play shooters online.


> What games [...] don't have the property you list there.

A game of almost any genre doesn't have to be designed purely on those neural and mechanical skills.

(And puzzles are actually a bad example, as many can be unimaginatively played by a machine, and machine typically wins in terms of the computing speed. Unlike a complex strategy game where bots don't necessarily dominate human players.)

My previous comment even hinted at how an FPS could be designed not to depend on differences in eyesight, sleight of hand, or gaming-chair performance. If everyone has a perfect aimbot by design, tactics and teamplay (if it's a team game) become the deciding factor in a shooter. If everyone has a helper AI that alleviates the mundane clicking, you don't have to set your mouse on fire doing 9000 APM micro; the actual strategic thinking and planning ("macro" rather than "micro") becomes more important to winning a tower defense or RTS game.

I wish I could just write out a whole implementation idea, but I'm no game designer. I just believe that things could be designed and balanced in such a way. I could be wrong, but I don't think it's been proven yet (given the modern status quo of "don't you dare think of any aids or tools but those the game designers have very, very explicitly allowed").

It's just that most games out there were never designed for this, so their gameplay becomes extremely unrewarding: there would be a huge imbalance if everyone were mechanically perfect. Which is probably why there are no cheaters' competitions.

Maybe I'm thinking about a different genre, something like first-person-shooter-but-not-FPS?


I wonder if more low tech solutions could work. What if you added random short screen blackouts or unexpected lags and see if perfect input still comes through. Anything that a human would react to but a bot won’t.


If your shooter intentionally drops frames or adds network or input lag, nobody is going to want to play it. This is the genre that generates most of the consumer demand for > 60 Hz refresh rate monitors.


This is so incredibly wrong, I'm not sure why you come to a forum and lie openly like this. Anyone who has reversed modern anti-cheats will disagree with this statement.


Maybe they take advantage of it but no it's not just an excuse. 10+ years ago hacking in games was very common and annoying.


This is true also for other SW. Now everybody has telemetry and services running with elevated rights. I think the future is to treat these programs as malware.


This is incredibly wrong. One of the main USPs of the service that I'm working on right now, fastcup.net, is a third-party anticheat. People use our service exactly because they trust our anti-cheat to provide a better value than what CS:GO has by default.


"Thanks" to competitive gaming involving real money incentives these days, cheats have reached the level of custom PCIe cards directly accessing kernel memory via DMA.

So a kernel rootkit is the bare minimum to try and detect these attacks, preventing them isn't even on the table any more.

In the distant future, you might be able to bypass it with hardware-authenticated homomorphic encryption, but that's still way off.


The client can just lie about its state.

The only solution that is deterministic would be to move all rendering server-side. You could guarantee a fair match as long as participants are within some reasonable distance to the server.

Note that this has other massive benefits if you can build for it natively...


> server can then authenticate the client's state

I'm sorry, but how exactly would it work? Do you mean that the server would authenticate the whole memory of the client's process, a few hundred megabytes at least, in order to make sure that there's no code that switches alpha of the wall textures to 0 every 5th frame, for example?


That has always been the promise behind Trusted Computing, yes. Maybe in another 20 years Intel will actually deliver a reliable implementation of it.


It doesn't require it. Anti-cheat for multiplayer could be done entirely server-side, by peers (started by vote), or on demand; all by just checking that the player could do what their client claimed to do.

It's just event logging and replay.

But checking drivers and secure connections is easier.
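The logging-and-replay idea can be sketched as a server-side plausibility check; the speed limit and tick format here are illustrative:

```rust
// Replay-style server validation: given a client's claimed positions per
// tick, reject any movement faster than the game's rules allow.
struct Sample { t: f64, x: f64, y: f64 } // time in seconds, position in meters

fn plausible(track: &[Sample], max_speed: f64) -> bool {
    track.windows(2).all(|w| {
        let (a, b) = (&w[0], &w[1]);
        let dist = ((b.x - a.x).powi(2) + (b.y - a.y).powi(2)).sqrt();
        let dt = b.t - a.t;
        // Non-advancing timestamps or implied speed above the cap both fail.
        dt > 0.0 && dist / dt <= max_speed
    })
}

fn main() {
    let legit = [Sample { t: 0.0, x: 0.0, y: 0.0 }, Sample { t: 1.0, x: 5.0, y: 0.0 }];
    let teleport = [Sample { t: 0.0, x: 0.0, y: 0.0 }, Sample { t: 0.1, x: 50.0, y: 0.0 }];
    println!("{} {}", plausible(&legit, 10.0), plausible(&teleport, 10.0)); // true false
}
```

This catches the "impossible" class of cheat (teleports, speed hacks); the subtler aim-assist class still needs the statistical approach described below.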


Event logging is irrelevant if you have incorporated certain optimizations into your game.

For instance, many forms of netcode necessitate revealing slightly more information to players than you otherwise would want to. The world coordinates of player footstep sounds are almost certainly among the information flowing across the network.

All you would need to do is intercept this information on the network and view it on an entirely decoupled system in a 3d coordinate space - potentially one synchronized to your player character using similar snooping tactics. Valve has done a pretty good job at making this harder with asymmetric encryption, but its still something the client can ultimately decode or otherwise you wouldn't hear shit during a multiplayer match.

Trying to lock down/validate the actual gamer's PC is a fool's errand. Just go back to first principles in information theory to see what a joke this is. If a certain fact made its way to a player's computer (or simply their home network), you should assume that they know it in the most adversarial way possible and model for that outcome. Obfuscation is just playing yourself in the long run.


How does checking whether what the client claimed to do was possible answer the question of whether the player had the skill to actually do it?


Because many cheaters do things which are impossible. This is low-hanging fruit, yet we're told we need a ring-0 driver with access to EVERYTHING. Stupid things like tracking other players through walls are still common because they're so damned easy. You play back their events and you see the cheater always knows where to go, where to hide. There are also exploits. These can all be unit-tested away.

But there are cheats like kickback compensation and hitbox tracking. You can apply statistical models and find unlikely consistency, but it's hard to say for certain.


How do you "use tokens" to prevent prying in its address space?

