Hacker News | new | past | comments | ask | show | jobs | submit | fabian2k's comments

Where are they supposed to put all that gas and oil if they can't transport it? I don't think they have much choice here.

And as far as I understand, helium is a byproduct of the extraction, so they can't choose to keep only the helium.


However, Qatar stopped production before the straits were officially closed, and its stated reason was "due to military attacks". Also, Russian or Chinese ships can pass.

There is no such thing as "officially closed". The moment people start shooting there, driving a ship across becomes dangerous. This was an absolutely predictable consequence of the attacks on Iran, you didn't need to wait until several tankers were burning to know these attacks were likely to happen and the strait would become essentially too risky to pass.

Back then there were only two ships attacked in the straits, and one was an Iranian shadow fleet ship. I am not sure that is "closing the straits" in any shape or form

So if there's an active shooter on the one alley to your workplace you should still be at work in time, right? :)

Or let's make the analogy clearer: if your Uber driver cancels the ride because there's an active shooter on the only road between him and you, it's their fault not the shooter's?


No, but if only two ships were hit, one of them clearly by mistake, it is very early to say the straits are going to be closed, as opposed to this being incorrect targeting.

Your analogies have gone past me, though. Despite a common misconception, countries are not people, and wars are not comparable to crime.


Oh, well, if it was by mistake...

The active shooter is shooting a specific group of people who don't include you. Will you walk past?

That's precisely how you close the straits; by making everyone scared to go through.

You don't even have to scare everyone. You just have to scare the insurers. Without insurance ships won't sail. The exposure is huge, so a small blip in risk makes all the modeling go kerplooie. Traffic stopped when the insurers said drop the anchors.

To restore traffic, we need that risk to return to previous levels, which requires diplomacy and trust. I don't expect resolution any time soon.


Impeachment, and then we could get there. It's not impossible.

With JD Vance things will go even worse

From all I've seen, he actually argued against this war prior to paying lip service to it. He'd be an improvement.

I thought Vance was the actual isolationist, America-first guy? Not the Trump kind, whose opinion changes based on which authoritarian he last had a phone call with.

In this specific case, maybe Vance is the least bad option.


Vance called Trump "America's Hitler", then ran as his VP.

He's a windsock.


As the houthis have long demonstrated, you can screw up shipping from the coast

I'm guessing you watched the Hegseth interview?

--- Hegseth: “The only thing prohibiting transit in [Hormuz] right now is Iran shooting at shipping.”

“It is open for transit should Iran not do that” ---

Oh really? I thought it was because Mercury was in retrograde.

I guess if even Mr. Hegseth is admitting that transit is effectively prohibited in the Strait, he must actually be lying and part of the deep state.


Not just dangerous but uninsured...

Are Russian or Chinese ships actually passing? Junior just released a decree saying not one liter of oil will pass. It didn't have an asterisk allowing Russian or Chinese ships.

I also find it funny that we just decided to allow Russia to pad its coffers by temporarily lifting sanctions on sale of Russian oil. Sorry Ukraine!


The strait is now mined at least partially. Country of origin doesn't matter when there are mines in the water.

We really are overdue for mines with IFF that can temporarily inert themselves for blue ships.

The problem is systems like that have a failure rate.

Self deactivating land mines exist - and sometimes fail to do this (3/100 was the rate I heard a few years ago).

Same problem as with cluster munitions: it's not how they're meant to work. It's that a bunch of the bomblets fail to detonate, then leave UXO around that blows off a child's hand later.


This is going to be an environmental disaster.

Only if there's no diplomatic resolution - however unlikely.

It is disconcerting to see that quite a few of the well-known billionaires seem to have just outright insane beliefs. And those are people with real power and the ability to influence events on a larger scale.

I would say it is natural that humans with so much power go crazy. What is not natural is allowing them to have that power in the first place. If a society allows that, it deserves anything that could happen to it, whether it's Armageddon, climate change, pollution, idiocracy, or whatever.

How does it work with dictators? I suggest it's a spectrum: the more powerful you are, the more you can surround yourself with yes-men. Of course there are a lot of different people, there's probably very grounded dictators and billionaires too, you probably don't hear much about them.

It doesn't seem that surprising. You need a certain level of narcissism/sociopathy to have the drive to become a billionaire in the first place.

Europe is a big place, but my understanding is that the US is the outlier here and Europe is relatively similar in this regard.

The only time I really saw checks used was when I was a child ~30-35 years ago and my parents used them. I did once cash a check from an elderly relative, but that was very unusual and only happened once. I didn't even know it was still possible to do that; my reaction was more as if someone had handed me a stack of punch cards to run on my computer.

There hasn't been anything an average person used checks for in the last decades in Germany. Except for a few elderly people, nobody uses checks, and there are no rebates via checks at all.


I live in France and I still have to write a check here and there. Very minor, but still present.

Receiving a check however is even rarer.


It feels a bit like in a Road Runner cartoon. We already ran well past the cliff, just haven't noticed yet that we should be falling down.

We'll see if the markets are still too optimistic here or not. I don't see how this will resolve quickly, so the Strait of Hormuz will likely remain essentially closed for quite a bit longer. I don't see any escort plans by US military ships as working, if Iranian troops actively try to disrupt this.

It's the insurance companies who are saying "no way". No insurance, no shipping.

Trump was mostly a known quantity when elected a second time. And on foreign policy it was clear that he would be at least erratic and unpredictable. This is not an unexpected result of voting for Trump.

People voted for a vindictive and petty criminal that doesn't care about rules or laws. This is the result of that.


I agree. It's impossible for me to extend any cover to someone who says they thought they were voting for a guy who would not start any wars.

I can't think of a time in my life when the choice was any more clear.


The last time I looked into it my impression was that disabling the JIT in PostgreSQL was the better default choice. I had a massive slowdown in some queries, and that doesn't seem to be an entirely unusual experience. It does not seem worth it to me to add such a large variability to query performance by default. The JIT seemed like something that could be useful if you benchmark the effect on your actual queries, but not as a default for everyone.
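For anyone who wants to benchmark this themselves before flipping the default: JIT can be toggled per session, or left on with higher cost thresholds so it only fires for genuinely expensive queries. A minimal sketch using the standard PostgreSQL settings (the threshold values shown are just the stock defaults; verify against your version's docs):

```sql
-- Disable JIT for the current session (or set jit = off in postgresql.conf
-- to change the server-wide default):
SET jit = off;

-- Alternatively, keep JIT enabled but raise the cost thresholds so it only
-- kicks in for queries the planner estimates as expensive:
SET jit_above_cost = 100000;           -- default
SET jit_inline_above_cost = 500000;    -- default
SET jit_optimize_above_cost = 500000;  -- default

-- Check whether a given query actually triggers JIT: run
--   EXPLAIN (ANALYZE) <your query>;
-- and look for a "JIT:" section at the bottom of the output.
```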


That is quite strange, given that the big-boy RDBMSs (Oracle, SQL Server, DB2, Informix, ...) have all had JIT capabilities for several decades now.


The big boys all cache query plans, so the amount of time it takes to compile is not really a concern.


Postgres caches query plans too. The problem is you can only cache what you can share, and if your planner works well, you can share very little; there can be a lot of unique plans even for the same query.


No, it cannot cache query plans between processes (connections), and the only way it can cache within the same process and connection is for the client to manually prepare the statement. This is how the big boys did it 30 years ago, but not anymore.

It was common guidance back in the day to use stored procedures for all application access code because they were cached in MSSQL (which PG doesn't even do). Then around 2000 it started caching based on statement text, and that became much less important.

You would only use prepared statements when doing a bunch of inserts in a loop or something, and they have a very small benefit nowadays, only because you're not sending the same text over the network over and over and hashing it to look up the plan.


I didn't say it can cache between processes. The problem is not caching between processes, it's that caching itself is not very useful, because the planner creates different plans for different input parameters of the same query in the general case. So you can reliably cache plans only for the same sets of parameters. Or you can cache generic plans, which Postgres already does as well (and sharing that cache won't solve much of the problem too).


Other databases cache plans and have for years because it's very useful: many (most?) apps run many of the same statements with differing parameters, so it's a big win. They do this without the client having to figure out the statement-matching logic the way your various PG ORMs and connection poolers try to do.

They also do things like auto parameterization if the statement doesn't have them and parameter sniffing to make multiple different plans based on different values where it makes sense.

https://learn.microsoft.com/en-us/sql/relational-databases/q...

You can also add HINTs to control this behavior if you don't like it or it's causing a problem in production. Crazy, I know.

https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-tr...

PG is extremely primitive compared to these other systems in this area, and it has to be since it doesn't cache anything unless specifically instructed to for a single connection.


You make some unsubstantiated claims here. I assure you that it isn't as simple as you claim. And what Postgres does here is (mostly) the right thing, you can't do much better. You simply can't decide what plan you need to use based on the query and its parameters alone, unless you already cached that plan for those parameters (and even in that case you need to watch out for possible dramatic changes in statistics). Prepared statements != cached execution plans.


Ah yes, so Microsoft and Oracle do these things for no good reason. You are the one making unsubstantiated claims, such as "you can't do much better" and "You simply can't decide what plan you need to use based on the query and its parameters alone", which is mostly what those systems do (along with statistics). If you had bothered to read what I linked, you could see exactly how they do it.

I never said it was simple, in fact I said how primitive PG is compared to the "big boys" because they put huge effort into making their systems fast back in the TPS wars of the early 2000's on much slower hardware.

>Prepared statements != cached execution plans

Thats exactly what a prepared statement is:

https://en.wikipedia.org/wiki/Prepared_statement


There are reasons for that; it's useful in a very narrow set of situations. Postgres cached plans exist for the same reason. If you're claiming Oracle and MSSQL do _much_ better in this area, that's what I call unsubstantiated. From what you write further, it's pretty clear you don't have a lot of understanding of what happens under the hood. And no, prepared statements are not what you read on Wikipedia. Not in all databases, anyway. Go read it somewhere else.


>There are reasons for that, it's useful in a very narrow set of situations.

So narrow it's enabled by default for all statements in the "big boy" commercial RDBMSs...

https://www.ibm.com/docs/en/i/7.4.0?topic=overview-plan-cach...

https://docs.oracle.com/en/database/oracle/oracle-database/1...

https://learn.microsoft.com/en-us/sql/relational-databases/p...

https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c...

>Postgres cached plans exist for the same reason.

PostgreSQL doesn't cache plans unless the client explicitly sends commands to do so. Applications cannot take advantage of this unless they keep connections open, reuse them in a pool, and manage this themselves. The plan has to be planned for every separate connection/process rather than served from a single shared cache, increasing server memory costs, which are plan cache size times number of connections.

It has no "reason" to cache plans; the client must do this for its own "reasons".

>If you're claiming Oracle and MSSQL do _much_ better in this area - that's what I call unsubstantiated.

You are making all sorts of claims without a link to back them up. Are you suggesting PG does better than MSSQL, Oracle, and DB2 at planning while being constrained to replan every single statement? The PG planner is specifically kept simple so that it is fast at its job, not thorough, or it would adversely affect execution time more than it already does; this is well documented and always a concern when new features are proposed for it.

>From what you write further it's pretty clear you don't have a lot of understanding what happens under the hood.

Sticks and stones. Is that all you have? How about something substantial.

> And no, prepared statements are not what you read in Wikipedia. Not in all databases anyway.

OK, Mr. Unsubstantiated, are we talking about PG or not? What does one use prepared statements for in PG, hmmm, you know, the thing you call the PG plan cache? How about something besides your claim that prepared statements are not in fact plan caches? Are you talking about completely different DB systems? How about you substantiate that?


https://www.postgresql.org/docs/current/runtime-config-query...

and then

https://www.postgresql.org/docs/current/sql-prepare.html

Read carefully about "plan_cache_mode" and how it works (and its default settings). Sorry, that's my last message in this thread, and I'm still here just for educational purposes, because what you're talking about is in fact a common misconception. If you read it carefully, you'll see that generic plans do not require any "explicit commands": Postgres executes a query 5 times in custom mode, then tries a generic one, and if it worked (not much worse than the average of the 5 custom plans), the plan is cached. You can turn it off, though. And I'd recommend turning it off for most cases, because it's a pretty bad heuristic. Nevertheless, for some (pretty narrow set of) cases it's useful.

So, Mr Big Boy, now we can get to what a prepared statement in Postgres is. Prepared statements are cached in a session, but if that statement was cached in custom mode, it won't contain a plan. When Postgres receives a prepared statement in custom mode, it will just skip parsing, that's it. The query will still be planned, because custom plans rely on input parameters. If we run it in generic mode, then the plan is cached.
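For readers following along, the custom-vs-generic behavior being argued about here can be observed directly in psql. A sketch, with a made-up table and statement name:

```sql
-- Hypothetical table, for illustration only:
CREATE TABLE events (id bigint PRIMARY KEY, kind text, payload text);

PREPARE get_events(text) AS
  SELECT * FROM events WHERE kind = $1;

-- With the default plan_cache_mode = auto, the first five EXECUTEs are
-- each planned fresh using the actual parameter value ("custom plans");
-- only after that may Postgres switch to a cached generic plan, if its
-- estimated cost isn't much worse than the average custom plan.
EXECUTE get_events('click');

-- A generic plan (parameters left as $1) can be inspected directly
-- on PostgreSQL 16 and later:
EXPLAIN (GENERIC_PLAN) SELECT * FROM events WHERE kind = $1;

-- Or force the behavior explicitly per session:
SET plan_cache_mode = force_custom_plan;   -- always replan
SET plan_cache_mode = force_generic_plan;  -- always use the cached plan
```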


I think you should read carefully: this only applies to prepared statements within the same session, which is exactly what I have been saying. There is no global cache, and if you reset the session, it's gone.

This controls whether prepared statements even use a cached plan at all. Other databases can do this with hints, and they can skip parsing by using stored procedures, which are basically globally named prepared statements that the client can call without preparing a temporary one. Or they can use prepare, but again this is typically a waste of time, because parsing just enough to match existing plans is fast (soft vs. hard parse in Oracle speak). They have many more options with more powerful caching abilities that all clients can share across sessions.

The only time PG "automatically" caches the plan is when it implicitly prepares the plan within a PL/pgSQL statement, like doing an insert loop inside a function, and it's still only for the current session. This is just part of the planning process in other databases, which cache everything all the time, globally.

You don't seem to understand that most other commercial "big-boy" RDBMSs cache plans across sessions, that nothing has to be done for them to reuse plans between completely different connections with differing parameters, and that they can still keep specialized versions based on those parameter values vs. a single generic plan.

At least now you admit prepared statements are in fact a plan cache, contradicting your other statements, and you seem to make a gotcha out of an option to disable that cache.

You can see various discussions on pgsql-hackers. Here is one where the submitter confirms everything I have said; he attempted to add the automatic part (but not tackle the much harder sharing-between-sessions part) and was shot down. I don't believe much has changed in PG around plan caching since this post, and it even has a guy who worked on DB2 talking about how they did it: https://www.postgresql.org/message-id/flat/8e76d8fc-8b8c-14b...


Sure, but that's not the main issue. If you add a global cache, it will have only marginal value. There are Postgres extensions/forks with a global cache, and they are not wildly more efficient. The main issue you still do not understand is that for different parameters you _need_ different plans, and caching doesn't help with that. It can help with parsing, sure. But parsing is very fast relative to planning. And you keep conflating "prepared" statements with plan caching. OK.

>If you add a global cache, it will have only a marginal value

Please substantiate this. Again, all the other major commercial RDBMSs do this and have invested a lot of effort and money into these systems; they would not do something that has marginal value.

Again, I went through the era of needing to manually prepare queries in client code, when it was the only choice, as it is now in PG. It was not a marginal improvement when automatic global caching became available; it was objectively measurable via industry-standard benchmarks.

You can also find other posts complaining about prepared statement cache memory usage, especially when libs and poolers auto-prepare: the cache is repeated for every connection, so 100 connections equals 100x the cache size. Another advantage of a shared cache; this is obvious.

I will leave you with a quote from Bruce Momjian, you know, one of the founding members of the PG dev team, in the thread I linked that you didn't seem to read, just like the other links I gave you:

"I think everyone agrees on the Desirability of the feature, but the Design is the tricky part."

>The main issue you still do not understand is for different parameters you _need_ different plans, and caching doesn't help with that.

You still don't seem to be grasping what other, more advanced systems do here, and again you don't seem to be reading any of the existing literature I am giving you. These systems will make different plans if they detect it's necessary; they have MULTIPLE cached plans of the same statement, and you can examine their caches and see stats on their usage.

These systems also have hints that let you disable caching, force a single generic plan, or tell the planner how to evaluate specific parameters (unknown values, specific hard-coded values, etc.) if you want to override their default behavior, which uses statistics and heuristics to determine which plan to use.

Please, I beg you, read what a modern commercial DB can do here and stop saying it doesn't help or can't be done. Here is a direct link: https://learn.microsoft.com/en-us/sql/relational-databases/p...

>And you keep conflating "prapared" statements with plan caching.

Again, we are talking about PG, and the only way PG caches a plan is via prepared statements. In PG, prepared statements and plan caching are the same thing; there is no other choice.

From your own link trying to gotcha me on PG plan caching config, first sentence of plan_cache_mode: "Prepared statements (either explicitly prepared or implicitly generated, for example by PL/pgSQL) can be executed using custom or generic plans."

The only other things a prepared statement does are skip parsing, which is another part of caching, and reduce network traffic from client to server. These things can be done with stored procedures in systems that have global caches shared across all connections, and those systems still support the very rare situation of using a prepared statement; it's almost vestigial nowadays.

Here is Microsoft's guidance on prepared statements in MSSQL nowadays:

"In SQL Server, the prepare/execute model has no significant performance advantage over direct execution, because of the way SQL Server reuses execution plans. SQL Server has efficient algorithms for matching current Transact-SQL statements with execution plans that are generated for prior executions of the same Transact-SQL statement. If an application executes a Transact-SQL statement with parameter markers multiple times, SQL Server will reuse the execution plan from the first execution for the second and subsequent executions (unless the plan ages from the plan cache)."

https://learn.microsoft.com/en-us/sql/relational-databases/q...


If you think I'm trying to "gotcha" you, you're mistaken. I'm past the point where I would care about that. It was simply an (apparently failed) education opportunity. Be well.

>So, Mr Big Boy, now we can get to what a prepared statement in Postgres is.

Yeah, not a gotcha at all, mr teacher. I think you should stop posting low-effort responses and examine your own opportunities for education that may have been missed here.

Let's get this straight: prepared statements supposedly should not be conflated with caching, yet the only way to cache a plan and avoid a full parse is to use a prepared statement, it is by far the biggest reason to use one, and it is why many poolers and libraries try to prepare statements.

Do you realize how ridiculous this is? Here are PG's own docs on the purpose of preparing:

"Prepared statements potentially have the largest performance advantage when a single session is being used to execute a large number of similar statements. The performance difference will be particularly significant if the statements are complex to plan or rewrite"

"Although the main point of a prepared statement is to avoid repeated parse analysis and planning of the statement, PostgreSQL will force re-analysis and re-planning of the statement before using it whenever database objects used in the statement have undergone definitional (DDL) changes or their planner statistics have been updated since the previous use of the prepared statement."

The MAIN POINT of preparing is the thing I am supposedly conflating it with, yes...

If PG cached plans automatically and globally, then settings like constraint_exclusion and enable_partition_pruning would not need to exist, or would at least be on by default, because the added overhead of those optimizations during planning would be meaningless.

Seriously, this whole thread is Brandolini's law in action. You obviously can't articulate how PG is better despite not having a global plan cache, and you act like I don't know how PG works? Get real, buddy.

Are you going to post another couple sentences with no content or are you done here?


You can't get a plan cache without a prepared statement, but you can get a prepared statement without a plan cache. It's not the same thing, and in most cases in Postgres prepared statements _do_not_ give you plan caching, because they are executed with custom plans. "Custom plan" is a misnomer: having a "custom plan" means the query is replanned on each execution. It's a common misconception; even a sizeable portion of the articles you can find on the internet miss this. But if you have good reading comprehension, you can read, and possibly understand, this:

> A prepared statement can be executed with either a generic plan or a custom plan. A generic plan is the same across all executions, while a custom plan is generated for a specific execution using the parameter values given in that call.

here https://www.postgresql.org/docs/current/sql-prepare.html

You're also mixing up parsing and planning for some reason. Query parsing costs like 1/100 of planning, it's not nothing, but pretty close to it.

Even though you're just a rude nobody, it still may be useful for others, who may read this stupid conversation...


>You can't get a plan cache without a prepared statement, but you can get a prepared statement without a plan cache.

What is the purpose of a prepared statement without a plan cache? I thought parsing was a non-issue? All that's left is a little extra network traffic savings.

I will, for a second time, quote the PG documentation (that you linked, btw) on what the MAIN POINT of a prepared statement is according to the maintainers. I am not sure why I have to repeat this again:

"Although the main point of a prepared statement is to avoid repeated parse analysis and planning of the statement, PostgreSQL will force re-analysis and re-planning of the statement before using it whenever database objects used in the statement have undergone definitional (DDL) changes or their planner statistics have been updated since the previous use of the prepared statement.”

I am not sure what point you are trying to make other than worming your way out of your previous statements. Prepared statements are in fact plan caches, and that is their MAIN purpose according to PG's own documentation. You haven't given any other purpose for their existence; I gave the other two, one of which you dismissed, and the third is not even listed in the PG docs and is also minor.

> It's not the same thing, and in most cases in Postgres prepared statements _do_not_ give you plan caching, because they are created for custom plans.

The default setting is auto, which will cache the plan if the generic plan's cost is similar to a custom one, based on the 5-run heuristic. This is going to happen most of the time for the repeated simple statements that make up the bulk of application queries, which is why other databases do this all the time, globally, without requiring a prepare. It is a large savings. I'm not sure why you think this would not occur regularly, and if you have any data to back that up, I am sure everyone would like to see it; it would upset conventional thinking in the other major commercial RDBMSs, with their hard-won gains over many years.

>You're also mixing up parsing and planning for some reason.

No, I am not. You are obviously not comprehending what I said and cannot read the documentation I quoted, which I had to repeat a second time here. I am not sure why you think I am mixing them up; I was only trying to be gracious and include the other benefit of prepared statements, one of the two that's left if it doesn't cache the plan: it avoids parsing, which yes has a smaller impact, and the third matters even less.

Also, not everyone shares PG terminology. Oracle refers to what you call parsing as a soft parse (syntax check, semantic check) and to parsing plus planning as a hard parse (rewrite and optimizing, row source generation). You obviously have little experience outside of PG and seem to have a myopic view of what is possible in RDBMS systems and how these terms are used.

>Query parsing costs like 1/100 of planning, it's not nothing, but pretty close to it.

Again, what is the point of a prepared statement if skipping parsing is meaningless and planning is not THE MAIN POINT?

>Even though you're just a rude nobody, it still may be useful for others, who may read this stupid conversation…

Further ad hominem, and you call me rude? Who are you to say this? How about you step off your high horse and learn something, mr superior somebody. I was trying to debate in good faith and you insult me with zero substance. Yeah, this is a stupid conversation...


> then tries a generic one, if it worked (not much worse than an average of 5 custom plans), the plan is cached

Seems like it's not great at detecting this in all cases[1]. That said, I do note that was reproduced on PG16, perhaps they've made improvements since, given the documentation explicitly mentions what you said.

[1]: https://www.michal-drozd.com/en/blog/postgresql-prepared-sta...


That's exactly what I said above: just turn this thing off. The reason is that even if your generic plan is better than the 5 custom plans before it, that doesn't guarantee much. With probability high enough to cause trouble, it's just a coincidence, and generic plans in general tend to be very bad (because they use hardcoded constants instead of statistics for planning).

This behavior is often a source of random latency spikes, when your queries suddenly start misbehaving, and then suddenly stop doing it. If you don't have auto_explain on, it will look like mysterious glitches in production.

The few cases where they are useful are very simple ones, like single-table selects by index. Those are already fast, and with generic plans you can cut planning time completely. Which is kinda... not much. There are more complicated cases where they are useful, involving Postgres forks like AWS Aurora, which has a query plan management subsystem allowing you to store plans directly. Then you can cut planning time for them. But that's a completely different story.
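One way to catch the latency spikes mentioned above is to log the actual plans of slow executions with the auto_explain module. A sketch with illustrative threshold values:

```sql
-- Normally configured server-wide via session_preload_libraries in
-- postgresql.conf; for a quick session-level experiment:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '250ms';  -- log plans of slow queries
SET auto_explain.log_analyze = on;            -- include actual row counts

-- Queries that suddenly misbehave after a switch to a generic plan
-- will now appear in the server log together with the plan that was
-- actually executed, instead of looking like mysterious glitches.
```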


That's not generally correct. Compile-time is a concern for several databases.


Most systems submit many of the same queries over and over again.

Ad-hoc one off queries usually can accept higher initial up-front compile cost because the main results usually take much longer anyway, vs worrying about an extra 100ms of compile.

Maybe it was too strong to say it's not a concern at all, but it's nothing like PG, where every single request needs to replan and potentially JIT unless the client manually prepares and keeps the connection open.


That's not what this user was talking about.

For example, with a German ID you can provide proof that you are older than 18 without giving up any identifying information. I mean, nobody uses this system at the moment, but it does exist and it works.


Does the German ID system know what you are trying to access, based on the requestor?


I'm not convinced age restrictions like this are a good idea. But yeah, the non-availability of IDs in the US is a self-inflicted problem.

Another example where this plays a role are voter registration and ID requirements for voting in the US. It is entirely bizarre to me how these discussions just accept it as a law of nature that it's expensive and a lot of effort to get an ID. This is something that could be changed.


When one of the only two political parties does not want everyone to vote (cause they’d lose every election) you get what we got…


You may underestimate the levels of classism and racism in the US. Go on and bring up a conversation about it and you'll eventually get someone talking about how that would be socialism and we can't do that.


RAM increased the most, but also SSD and HDD prices increased significantly. And it seems there are also supply problems, so you can't even be sure if you get the components you want at higher prices.

