This sounds just like something my brother-in-law said. I think they are both technically correct and both missing the point. Does a calculator truly understand math when it spits out a correct answer? Of course not. And it doesn't matter. I have been really impressed with ChatGPT, and when it comes to shiny new tech I am usually in the pooh-pooh camp. If tech does something useful then it is useful tech. The fact that it is not true intelligence doesn't matter at all. Besides, what's intelligence anyway? Aren't we still debating that ourselves?
> Does a calculator truly understand math when it spits out a correct answer? Of course not.
Unless you're using a definition of "understand" that implies consciousness of self, I would argue that a calculator is a device that understands nothing except (a subset of) math. That's what makes a calculator reliable in ways that ChatGPT is not.
Philosophically speaking it could be argued that no software understands anything, but I think in the context of this discussion "understands" means "has a model of its context and the way one interacts with it", which is something a calculator (and plenty other software) definitely has and ChatGPT has not.
Calculators don't understand anything about arithmetic. They have no circuits for understanding, no code for understanding, nothing that could represent what humans mean by understanding.
They implement a set of physical processes that, when operated and interpreted by humans, can be mapped into a subset of arithmetic. There's a correspondence.
Correspondence is the most useful way to think about it, IMO. If there's a correspondence between what the machine does and things we humans understand, then the machine, as a tool, is useful.
Understanding is a loaded word. It has implications beyond correspondence when humans use it; it has aspects of qualia, of fact vs fiction, of situatedness in a graph of comprehension, of consonance or dissonance with a set of other concepts, and so on.
LLMs in my opinion have a good "situatedness" for words and concepts, relative to other concepts. Qualia - consciousness - arguably doesn't matter. Fact vs fiction, they're very shaky on. Consonance vs dissonance, they're useless at - LLMs IME tend to flatter the prompt, constructing arguments in whatever direction a loaded question leads. There's little to no coherence there at all.
I think this is where things can get kind of interesting, because future integrations of ChatGPT can farm the "real work" out to systems and tools which do have better models of the specific query.
The LLM approach may not be able to replicate the "knowledge" your calculator has, but it (or some pre/postprocessor) may be able to recognize that a given question is actually something a calculator can answer concretely, and then it can delegate the computation to traditional software that really does "know" how to answer the question.
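A minimal sketch of that delegation idea (every name here is made up for illustration, not any real plugin API): a router tries a trivial "calculator" first, and only falls back to the model when the question isn't plain arithmetic.

```python
import ast
import operator

# Tiny safe arithmetic evaluator: walks the AST instead of using eval().
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression, or raise if it isn't one."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def fake_llm(question: str) -> str:
    # Stand-in for the actual LLM call.
    return "plausible-sounding text about: " + question

def answer(question: str) -> str:
    """Route: try the 'calculator' first, fall back to the language model."""
    try:
        return str(safe_eval(question))
    except (ValueError, SyntaxError):
        return fake_llm(question)

print(answer("17 * 23"))               # delegated to the calculator: 391
print(answer("why is the sky blue?"))  # delegated to the (fake) LLM
```

The point of the sketch is only the routing: the "knowing" part lives in traditional software, and the model never has to do the arithmetic itself.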
That would work, but it seems antithetical to AI to have to treat every operation as a special case like that. They want GPT to be able to write computer programs, but it'll never be able to work completely independently of humans if every possible domain needs its own plugin to be reliable.
I know lots of the SV VC oligarch cult just wants to race forward and create something like AGI that will help them conquer the world and achieve immortality somehow, but hopefully this remains in the realm of science fiction. As RMS says, LLMs at least don't seem to be "it" no matter how much user generated data they ingest, because they do have these inherent limitations.
The far more practical (profitable) outcome for what they currently built is to just make a useful tool, a "smarter" wolfram alpha, and that can be iterated upon by delegating relevant operations to specific techniques that are more applicable to the question at hand.
Because if you have a special-case plug-in for everything, then the AI is just a natural language processor, and there's no deep learning for the actual functionality.
>Our brains have different processing centers.
Uhh, no they don't? Did you know everything you know now about math when you were born, and are you also incapable of learning new things about math? Because that's how the Wolfram plug-in works.
The Chinese Room Argument holds that a digital computer executing a program cannot have a "mind", "understanding", or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.
https://en.wikipedia.org/wiki/Chinese_room
The Chinese Room experiment suggests that pattern matching would return correct results for staged inputs, but one would not "learn" enough to evaluate an expression not contained in the data.
The Chinese Room thought experiment is not convincing to software engineers generally. It relies heavily on an intuition that looking things up in a book is clearly not "thinking". Software engineers know better: that "looking things up", if you can do it billions and trillions of times a second, can simulate a process which has a close correspondence to reasoning.
Addition and multiplication are trivially implemented using lookup, if you had a machine without arithmetic and only control flow and memory operations. You don't need much more than that for matrix operations, and now you have ChatGPT, a decent simulation of apparent thinking - which is all that is necessary to kill the intuition dead.
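As a toy illustration of that point, here is addition and multiplication built from nothing but table lookups and control flow; the `add`/`mul` routines themselves use no built-in arithmetic (the tables are set up with ordinary Python purely as scaffolding).

```python
# Successor and predecessor lookup tables: the "book" the room consults.
LIMIT = 1000
SUCC = dict(zip(range(LIMIT), range(1, LIMIT + 1)))  # n -> n+1
PRED = dict(zip(range(1, LIMIT + 1), range(LIMIT)))  # n -> n-1

def add(a, b):
    # Take the successor of a, b times: only lookups and a loop.
    while b != 0:
        a = SUCC[a]
        b = PRED[b]
    return a

def mul(a, b):
    # Repeated addition, again via lookups only.
    total = 0
    while b != 0:
        total = add(total, a)
        b = PRED[b]
    return total

print(add(3, 4))  # 7
print(mul(6, 7))  # 42
```

Nothing here "understands" arithmetic, yet composed billions of times over, operations of exactly this character are what the matrix multiplications behind an LLM reduce to.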
What is thinking if not a series of matrix operations? Your brain is just a huge network of neurons and their connections, is this not a (very complex) matrix?
Certainly not in any mathematical sense. The discrete N-dimensional coefficients of a matrix are not modeled by neurons, their connections, and the quantum mechanical electrical accidents that (a) constitute one's wetware, and (b) aren't completely captured by gates and code.
> (C1) Programs are neither constitutive of nor sufficient for minds.
> This should follow without controversy from the first three: Programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore no programs are minds.
---
I personally don't agree with it and believe that there is a flaw in:
> (A2) "Minds have mental contents (semantics)."
> Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.
While a person may know what they are thinking, examining the mind from the outside, it isn't possible to know what the mind is thinking. I would contend that, from outside a mind, looking at the firings of neurons in a brain is equally as indecipherable as looking at the connections of a neural net.
The only claim that "we know what it is they represent" is done from the privileged position of inside the mind.
I would argue that intelligence is more related to the Kolmogorov complexity exhibited by something.
( David Dowe: Minimum Message Length, Solomonoff-Kolmogorov complexity, intelligence, deep learning... https://youtu.be/jY_FuQbEtVM?t=886 )
Namely, that the model of GPT is much smaller than its input.
The Chinese room lookup table is enormously large.
If we attempt to dismiss GPT as no better than a Chinese room, we can show that a Chinese room lookup table is impossible given the amount of data that GPT has access to as part of its model.
If we say that it's not a lookup table but instead an enormously complex interplay of inputs and variables, then the distinction between the room that GPT exists in and our own mind breaks down trying to distinguish which is which.
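Kolmogorov complexity itself is uncomputable, but compressed size is the usual crude upper bound (as in the MML line of work), and it makes the "model much smaller than its input" idea concrete:

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Crude Kolmogorov-complexity proxy: size after maximum zlib compression."""
    return len(zlib.compress(data, 9))

# Highly patterned data: a short "model" explains a long string.
patterned = b"the cat sat on the mat. " * 200

# Pseudo-random data from a simple LCG: no short description for zlib to find.
state = 123456789
out = bytearray()
for _ in range(len(patterned)):
    state = (1103515245 * state + 12345) % (2 ** 31)
    out.append((state >> 16) & 0xFF)
random_ish = bytes(out)

print(compressed_size(patterned))   # tiny: the regularity is the "model"
print(compressed_size(random_ish))  # close to the raw input size
```

A lookup table big enough to cover GPT's behavior would be on the `random_ish` side of this divide; the actual model weights are vastly smaller than the training data they summarize.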
If we want to switch to consciousness, then possibly the argument can progress from there because GPT doesn't have any state once it is run (ChatGPT maintains state by feeding its output back into itself and then summarizing it when it runs out of space). However, in doing this we've separated consciousness and intelligence which means that the Chinese room shouldn't be applied as an intelligence test but rather a consciousness test.
Are GPT 3 and 4 conscious? I'll certainly agree that's a "no". Will some future GPT be conscious and if so, how do we test for it? For that matter, how do we test for consciousness in another entity that we're conversing with (and it's not just Homer with a drinking bird tapping 'suggested replies' in Teams ( https://support.microsoft.com/en-gb/office/use-suggested-rep... ))?
Depends on what the topic of understanding is. In this case it's actually token relationships, right? It does know that very, very well. And there's a lot (.. potentially, hah) that we can do with token relationships.
By itself it's unlikely to ever be knowledge, of course. I see it more akin to NLP than knowledge. Which is to say, a general-purpose language parsing tool whose results we can hand to something else. A conversational API, if you will, but we'll still need layers to actually run logic. To know math, if you will.
Disclaimer: I know very little on the subject. Pure speculation.
The question is what happens when you go multimodal (which these things can do) and GPT(N+1) learns the associations between words and images/video, as well as the relationships between successive frames of video, at what point does it become unreasonable to claim that it doesn't "understand" something? How good at general-purpose predicting does an AI have to be in order for people to accept that it obviously has an internal model of things and is capable of abstractions?
(Assuming that this happens, of course. Diminishing returns could make scaling infeasible past some point, for instance.)
And additionally, whether our memory and long term learning - and even our goal-choosing - is fundamentally different from an indexed storage of strings of tokens that can be brought back into short-term context when “triggered” by their embedding-similarity to the current context.
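That indexed-storage idea can be sketched in a few lines: stored strings are "triggered" back into context when their embedding vectors are close to the current one. (The vectors and memory contents below are entirely made up for illustration; a real system would get embeddings from a model.)

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Long-term memory: (text, embedding) pairs.
memory = [
    ("we discussed calculators yesterday", [0.9, 0.1, 0.0]),
    ("user's cat is named Ada",            [0.0, 0.2, 0.9]),
    ("user asked about Chinese Room",      [0.7, 0.6, 0.1]),
]

def recall(query_vec, k=2):
    """Bring the k most similar stored strings back into the current context."""
    ranked = sorted(memory, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(recall([1.0, 0.2, 0.0]))
```

Whether human episodic memory is "fundamentally different" from this retrieve-by-similarity loop is exactly the open question the comment raises.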
I definitely have that question too. I view us as big LLMs.
But even if we drop that interesting edge case, I suspect we can make something very useful with the primitive that LLMs offer, as in the calculator example. LangChain and co. seem a really interesting tool for LLMs.
It matters when you want a correct answer: if it has no way to confirm correctness, then you have a problem. The famous goof from ChatGPT that I bring up: someone asked ChatGPT what the differences were between cow eggs and chicken eggs. Instead of answering based on knowledge, it listed out plausible differences based on what it knew about cows, minus the fact that cows don't lay eggs.
GPT-4 gets this one now, just as it does the "diamond in a teacup" question. I asked about "pig eggs" instead of "cow eggs" in case it had memorized a public post about cow eggs.
---
Pig eggs and chicken eggs differ fundamentally, as pig eggs are not laid by pigs, while chicken eggs are laid by chickens. Let's clarify the differences:
Chicken eggs: These eggs are laid by female chickens (hens) and are a common food source for humans. They have a hard, calcium-based shell and contain the yolk, egg white, and other nutrients needed for a developing embryo. Chicken eggs are typically eaten for their nutritional content and can be cooked in various ways, such as boiled, fried, or scrambled.
Pig eggs: Pigs, being mammals, do not lay eggs like birds do. Instead, they reproduce through internal fertilization, and their offspring develop inside the mother's womb. When referring to "pig eggs," it's likely in reference to pig oocytes or ovum, which are the female reproductive cells involved in mammalian reproduction. These cells are microscopic and not something that can be consumed like a chicken egg.
In summary, chicken eggs are laid by hens and commonly consumed as food, while pig eggs (or more accurately, pig oocytes) are the female reproductive cells involved in pig reproduction and are not something that can be eaten.
Many of the answers ChatGPT gives are also what a human with average knowledge would give if you pointed a gun at their head and demanded an answer.
My gut feeling is that global long term memory and what we call hallucinations right now might actually be the next step to get closer to general intelligence.
I’m not sure we have the right reinforcement models yet tho, as counter intuitive as it might sound I don’t think that a reinforcement model that is based on correctness only is what we need.
Humans make mistakes all the time, and we bullshit all the time. If anything, I think ChatGPT scares people not because it gets things right but because of how confidently it gets things wrong, which is something we do all the time.
Older “chat bots” and other NLP things like Watson seemed to be essentially glorified search engines: they'll either give you a correct answer or they won't answer at all.
So they felt like nothing more than a tool, not unlike an encyclopedia. ChatGPT will produce an answer under most circumstances, but it won't be perfect, and this is what people seem to anthropomorphize the most.
This is on par with arguing that automobiles will never amount to anything beyond an aristocratic toy because they can't navigate trails on their own like a horse can, can't feed like a horse can, and couldn't even go very far anyway without something breaking.
> There is no such thing as cow eggs. Cows do not lay eggs. Chickens lay eggs that are commonly consumed by humans. The main difference between chicken eggs and other types of eggs is the way the chicken has been raised, treated, and fed. Different farming practices can affect the nutritional content of eggs.
Something my toddler would probably say as well. Does not mean my toddler has no intelligence - just not wired up to do that yet. Just as humans grow and mature through time, further iterations of AI models will as well
These are fixable as the model is improved and learns which information to discard. The fact that GPT-4 is already so much less error-prone than GPT-3 shows the speed at which this is happening.
This is a poor analogy. A calculator automates a completely deterministic operation. You don't really verify that a calculator's output is correct, do you? So, there is no question of intelligence. It is a "dumb" computer by definition.
It matters insofar as determining whether something is intelligent and/or sentient, which is critical to determining whether this "AI" should be conferred human rights or not.
In some sense, I'm not sure intelligent/sentient is all that important when separated from what we think requires intelligence/sentience. I think we attribute less and less to sentience these days and its feels like its less part of the conversation as a result.
Rights haven't really been core to the discussion as I've seen it, and in fact it's the first time I've seen it in months. It's a fair discussion but not the one that has been the focus around whether these models are intelligent or not, e.g. Chomsky's article in the NYT and the MS analysis of GPT-4, or most discussions here for that matter.
All of the nerds complaining about GPT-4 not being perfect aren’t talking about it from the perspective of “conferring it human rights”. It’s all about implying that it not being [insert nebulous word like “Intelligent” or “creative”] somehow makes it useless or a gimmick.
I haven't seen anyone call these LLMs "useless" or "gimmicks." What I have seen is pushback against calling them general AIs or even "intelligence" in general. LLMs are not "intelligent." They do not reason by any reasonable interpretation of the word, and it is certainly an unfounded leap to suggest they "reason" the same way humans do. Especially considering we don't really know how humans reason.
I'm not, we are surrounded by intelligent peers who are not humans and thus do not enjoy human rights (they do enjoy animal rights).
Eventually, after we move from "AI" where we are now to actual artificial intelligence, we'll have to figure out what rights should be conferred if any.
You know perfectly well what he's getting at. Call them Common Rights or Sentience Rights or whatever if the word Human is really causing you confusion. FFS, is that the best response you have?
Define machine rights and grant appropriate rights to qualifying machines… whatever those rights are. Like what, a 40-hour workweek, or the right to be upgraded? The right to free output even if it's not based on any evidence?
We created machines because nobody wants to work 24 hours a day, and because we want more productivity.
If we gave machines some rights, who would do the above for us?
> Does a calculator truly understand math when it spits out a correct answer?
It doesn't seem implausible that some of the first civilians who were exposed to calculators could have been convinced that they were capable of intelligent thought.
And 60s/70s science fiction is full of stuff that they imagined computers would do. Like asking questions and receiving answers that require both inference of facts and deduction like "computer, tell me what happened to this planet".
Eliza is the apt analogy. It's transparently just some if statements substituting phrases into the input, but laypeople that don't understand how it works read into it way more deeply.
Chatgpt is literally just a scaled up version of this. And there's been some kind of eternal September of people who don't understand how a computer works believing all sorts of stuff about it.
> For millions of years, mankind lived just like the animals. Then something happened which unleashed the power of our imagination. We learned to talk and we learned to listen. Speech has allowed the communication of ideas, enabling human beings to work together to build the impossible. Mankind's greatest achievements have come about by talking, and its greatest failures by not talking. It doesn't have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking.
We can see, with our own minds and those of animals, that something greater emerges with the additional size and complexity of the mind that wasn't there in simpler approaches.
It is not unreasonable to consider that, somewhere between Eliza and GPT-4, something greater has emerged that is able to maintain a consistent world model rather than just playing with words.
Weizenbaum took a "short cut" by going down the path of Rogerian psychotherapy, which allowed him to intentionally avoid the need for a world model in order to work with the words that are fed in.
GPT-3 and, even more so, GPT-4 have a world model that they are able to work with and interact with.
If we are going to call GPT "just an advanced chat bot" then I would contend it is equally appropriate to call a human "just an advanced sea squirt."
Are people not just a scaled up version of chat GPT? It's not like there's some magical substance floating around in our brains that's responsible for consciousness or intelligence or anything else. It's all just more or less deterministic chemistry.
I think this is the key question that hasn't been kicked around enough through all these discussions about AI. Whether or not ChatGPT is sentient or intelligent is kind of boring, and obviously the answer is currently no.
But are humans just slightly more advanced, chemical-based AI? I don't know. Certainly through the Internet, they seem like it. Go to Reddit and look at the comment section, especially in political discussions. I'm not convinced the "humans" posting there are not just the dumb output of language models--certainly not much more advanced than ChatGPT.

When you [think you] have an opinion about something, how do you know it's actually an original thought, and not just the algorithmic output of your brain's many years of "model training"? The word I type next in this comment may, when you peel back all the superstition about "souls" and "free will", simply be my language model's nextWord() function.

What is original art? If I paint a picture or compose music, it's not original. It's based on my many years of observing the world, looking at other art, listening to other music. What hubris to think that just because it leapt from human fingers onto a canvas it's somehow imbued with originality!
Human mind as prediction machine is actually pretty popular theory. See for example “Surfing Uncertainty. Prediction, Action, and the Embodied Cognition” by Andy Clark
> Until now scientists believed that our brain processes the stimuli received from the environment from the “bottom-up”, that is, when we hear someone speak, the auditory cortex of the brain processes the sound first and then activates other areas that are responsible for speech comprehension.
> However, more and more neuroscientists seem to support the theory that the brain ultimately analyzes the external stimuli from the “top-down”, which makes the brain a kind of “prediction machine”.
> As reported by U.S. researchers, our brain anticipates constantly in order to be able to respond lightning-fast and accurately to anything that is going to happen. For example, it is able to predict words and sounds from the context. From the phrase “grass is…” we can easily predict the continuation – it is probably the word “green”.
> > “Our findings show that the brain of both the speaker and the listener uses the process of language prediction. This results in similar brain patterns in both interlocutors,” said the study’s senior author Dr. Suzanne Dikker from the Department of Psychology, University of New York. “This happens even before the speaker utters the phrase he is thinking“.
Again, human brains are not token-prediction transformers. You can see one part of cognition and see that it has a somewhat analogous relation, but that's only one part of human cognition. Labeling a brain a scaled up GPT is mistaking a model for reality, and it's also mistaking a part for a whole.
> Does a calculator truly understand math when it spits out a correct answer? Of course not. And it doesn't matter.
It starts to matter the moment we teach people at large to call calculators "Artificial Math Professor", projecting the image that they do indeed comprehend mathematics at a higher level.
It starts to matter when I meet my aunt (a psychiatrist no less) for lunch and she is absolutely convinced that there is a conscious, intelligent entity living inside her calculator, and she wants to debate the ethics of this with me, getting angry and upset when I doubt the premise.
It starts to matter when I turn on the TV later that day and see our country's minister of education in a panel discussion, debating the future of our schools when AMPs will teach math to our kids instead of flawed, human teachers.
It starts to matter when our government starts debating new laws about "spreading dis-calculation" on social media, convinced that we can just let AMPs read and comprehend all those maths posts in real-time, reporting them to higher authority for posting wrong-calc, deciding on human lives in full-auto-mode.
It starts to matter when I discuss the above with my mother (a lawyer) over lunch and she doesn't understand what the problem is either, convinced that we actually have flawless AMPs that can do all of this, baffled that I, supposedly a "tech guy", am opposed to legislative decisions ushering in new, fancy hype tech.
It starts to matter when this is brought up in places like HN, and people with a level of actual technological knowledge, possibly involved in some of this, fail to see the consequences and instead focus on high-concept, philosophical debates about what really constitutes a "math professor", constantly moving goalposts around, hooked on their own hype, rather than marveling at the wondrous new technologies of the last years for what they actually are.
For people at large, a computer has always been a magic black box that they anthropomorphized anyway. When marketing starts using fancy SciFi words, re-defining their actual meaning to stir up hype, people happily go with the literal meaning of what they are told (as intended; hence the hype). They are absolutely willing to believe that we indeed trapped a Math Professor in a box, Max Headroom style.
Some of these people happen to be in places where they can make decisions, and that's when it really starts to matter.
People didn't call the printing press, calculators, computers, or PageRank "AI", but they all mechanize some aspect of "intelligence". (Though Boole did call his algebra the "laws of thought".)
How will its impact rank within that set?
(I think it's fair to include future improvement; but not new "AI" inventions, since the term "AI" is vague, little more precise than technology for processing information in new ways).
> Does a calculator truly understand math when it spits out a correct answer? Of course not.
Yes, but the difference is that nobody's trying to argue that the calculator is an intelligent being. I've seen people here on HN who are convinced that ChatGPT is a sentient lifeform which deserves its own rights.
The other big difference is that the calculator doesn't ever make things up like ChatGPT does.
Reminds me of Dijkstra's line: "the question of whether a computer can think is no more interesting than the question of whether a submarine can swim".
Amen to that and I'd even go a step further: The fact that it says untrue and invented things with much confidence is unfortunate but doesn't prevent its usefulness. People tell BS all the time and we deal with it.
No - we also don't. It depends on the context, and it depends on the stakes. Some contexts are more "bullshit" tolerant, others are critical.
Conversely to your statement: "we" do not condone the error of the politician, of the manager, of the doctor, of the lawyer, of the worker, and we want good warranties from any entity responsible (repeat: responsible).
Which, incidentally, is a reason why Decision Support Systems are Decision /Support/ Systems.
Of course it does. But that's the only thing it understands and it can never understand anything else. It has IMHO the very smallest possible kind of frozen intelligence.
Yes, insofar as we know exactly how the adder and shift registers are implemented we can safely say a calculator "understands" the math its limited button set can be asked to perform. We could certainly replace those circuits with a GPT-like set of internals that would be unreliable as Stallman means, perhaps more reliable than text generation, but fundamentally just as untrustable.
I really like pencils as well. A few years ago I started a development journal mainly to stay motivated and so that I could look back to see how far I had come. I quickly found out that the whole process was a lot more enjoyable with nice paper and pencils.
I use several pencils and like them all really. I currently use all four Blackwings (black, pearl, natural and 602) as well as a Tombow MONO 2H if I want fine lines, and I occasionally use the Mitsubishi k9800HB which is nice but not my favorite.
Now I am going to try the Mitsubishi Hi-Uni 10B and even the KH-20 pencil sharpener. I like the manual long point sharpener from Blackwing and it is what I use daily. Overall, for general use, I like either the Blackwing natural or the 602. The Black is very dark and makes a normal pencil look anemic, but you do have to sharpen frequently, although sometimes pausing to sharpen helps me clarify my thoughts, so I don't really mind. The Pearl gives you similar quality but is slightly harder, so it lasts longer between sharpenings. Also a very good choice.
How well do the pages hold up over time? When I look back at my notebooks from ten and twenty years ago, the pencil has not held up well. I think the rubbing of the pages from use and moving them around has really lightened the marks.
My journals are not yet that old, 3 to 4 years, so probably too early to say. Thus far, they look good, and with a really dark pencil, even with a bit of fading it doesn't really matter.
I have used asyncio through aiohttp, and I have been pretty happy with it, but I also started with it from the beginning, so that probably made things a little easier.
My setup is a bunch of microservices that each run an aiohttp web server based API for calls from the browser, where communications between services are done async using RabbitMQ and a hand-rolled pub/sub setup. Almost all calls are non-blocking, except for calls to Neo4j (sadly, they block, but Neo4j is fast, so it's not really a problem).
With an async API I like the fact that I can make very fast HTTPS replies to the browser while queueing the resulting long-running job and then responding back to the Vue-based SPA client over a websocket connection. This gives the interface a really snappy feel.
But Complex? Oh yes.
But the upside is that it is also a very flexible architecture, and I like the code isolation that you get with microservices. Nevertheless, more than once I have wondered whether I would choose it all again knowing what I know now. Maybe a monolithic Flask app would have been a lot easier, if less sexy. But where's the fun in that?
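For what it's worth, the quick-ack-then-push pattern described above can be sketched without any framework. Here an asyncio.Queue stands in for the websocket, and every name is invented for illustration rather than taken from aiohttp's API:

```python
import asyncio

async def slow_job(payload):
    """Stand-in for the long-running work that gets queued."""
    await asyncio.sleep(0.1)
    return payload.upper()

async def run_and_push(payload, websocket):
    # Do the real work, then push the result out over the "websocket".
    result = await slow_job(payload)
    await websocket.put(result)        # simulated ws.send_str(result)

async def handle_request(payload, websocket):
    # Fast path: schedule the job in the background and ack immediately.
    asyncio.create_task(run_and_push(payload, websocket))
    return {"status": 202, "detail": "accepted"}

async def main():
    ws = asyncio.Queue()
    ack = await handle_request("hello", ws)
    print(ack["status"])               # returned right away
    print(await ws.get())              # result arrives later over the socket

asyncio.run(main())
```

The snappy feel comes from the ack returning before any real work happens; the client just has to listen on the socket for the eventual result.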
> With an async API I like the fact that I can make very fast HTTPS replies to the browser while queueing the resulting long-running job and then responding back to the Vue-based SPA client over a websocket connection. This gives the interface a really snappy feel.
How does this compare to doing the same with eg. Django Channels (or other ASGI-aware frameworks)?
I have yet to find a use case compelling enough to dive into async in Python (it doesn't help that I also work in JS and Go, so I just turn to them in cases where I could maybe use asyncio). This is not to say it's useless, just that I'm still searching for a problem it's the best solution for.
I have never used Django, so I cannot say, but if Channels handles the websocket connection back to the client, then I assume you could send back a quick 200 notification as the HTTP response to the user and then send the real results later over the socket. I think these would be equivalent.
I have also never used Go, but I am comfortable saying that Python async is much easier to use than JS async. I find JS to be as frustrating as it is unavoidable.
Using aiohttp as an API is not bad at all. Once you have the event loop up and running, it's a lot like writing sync code. Someone else made a comment about the fact that Python has too many ways to do async because everything keeps evolving so fast. I think this is true. The first time I ever looked at async in Python it was so nasty I basically gave up and reconsidered Flask, but I came back around later because I so despised the idea of blocking my server that I was compelled to give it another go. The next time around was a lot easier because the libraries were so much improved.
I think a lot of people think that async Python is harder than it is (now).
> Maybe a monolithic flask app would have been a lot easier if less sexy. But where's the fun in that?
Not to sound snarky, but the fun would be in being able to solve business problems without fighting against a complex system. Microservices can definitely make sense, but only if you need them and know you need them. If it's not just for fun and not a hobby, and you're being paid to maintain someone else's system, then it's definitely worth going with the simpler system.
I share your sentiment and have been using aiohttp for five years, and I'm pretty happy with it. My current project is a web service with a blocking SQL Server backend, so I tend to do loop.run_in_executor for every DB execute statement. But now I'm considering just running a set of lightweight asyncio streams subprocesses with a simple bespoke protocol that takes a SQL statement and returns the result JSON-encoded, to move away from threads.
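The run_in_executor pattern mentioned here can be sketched as follows, with sqlite3 standing in for the blocking SQL Server driver:

```python
import asyncio
import sqlite3

def blocking_query(sql):
    """Ordinary synchronous DB call -- would block the event loop if awaited directly."""
    conn = sqlite3.connect(":memory:")
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

async def db_execute(sql):
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread pool so the event loop
    # stays free to serve other requests while the query runs.
    return await loop.run_in_executor(None, blocking_query, sql)

async def main():
    rows = await db_execute("SELECT 1 + 1")
    print(rows)   # [(2,)]

asyncio.run(main())
```

The subprocess-per-connection alternative trades this thread pool for process isolation, at the cost of writing and maintaining that bespoke protocol.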
This and the fact that they have your cc and shipping already on file, which makes things a lot easier. More than once I have found a product on some site and then purchased it from Amazon just because it is so much easier.
I read your comment and immediately thought 'uh-oh'. If there is some new mosquito in California, they'll be in the southeast in no time. Too late it appears. It's already old news. I guess no one noticed around here as we have so many insects anyways. No point in complaining to the authorities.
I think the point was that quite a bit of technological progress happened in the world prior to diversity initiatives. The reference to white nerds likely places the comment in a historical US perspective, possibly western European. Advanced technologies developed in other non-diverse cultures as well. China comes to mind in particular. But I feel like you knew that already.
"Feel free to let us know what that might be."
If we have sarcasm tags, maybe we should have snark tags as well?
It seems there is a regular cross-section between technology interests and firearms interests. In most groups of the first type I find there is a small group of the second, perhaps owing to the interesting problems to be solved in long-range shooting.
I have lots of guns of every sort, but I collect old Marlins. I especially like the ones from the 50s. They were never perfect, but the difference in factory quality from the old days vs modern stuff (especially the 70s and afterward) is amazing.
Right on. I still haven’t tried a Marlin. My first lever action was an Ithaca 49R which is from that era and is a great piece of history. Completely agree on the difference in factory quality.
I now use AWS to register my domains. I started with GoDaddy. I remember searching for some names (on GoDaddy) and finding something that was reported as available. I went to register it, and GoDaddy told me that it was no longer available, but that they had some service I could pay for where they would help me negotiate/handle the purchase from the owner. I called customer service to ask about this. I found out that it was GoDaddy itself that had purchased it right out from under me. That was the last time I ever dealt with GoDaddy.
This same thing happened to me. I was trying to register an obscure domain name (never before registered), left it for several hours, then came back to find it snapped up. The chances this was done without inside information on my lookups are surely slim. And with all the personal stories flying round like yours, I'm also inclined to believe a GoDaddy employee (or GoDaddy themselves!) sniped it.
The situation in SC is similar. Many insurers have packed up for the same reasons they left Florida. The same roof repair scams have been running here as well. What I don't understand about the scam is how the insurance adjuster comes out and agrees that a new roof is warranted when so frequently that is not the case. If the adjusters kept a lid on things, none of this would be possible, I think. It's baffling, but in the end it is bad for everyone (with the possible exception of the roofing companies). My house is not in a flood zone, but we definitely need insurance (we actually have flood insurance despite not being in a flood zone).