We were one of the 666 repos, and I'm not too happy about having our repo used as advertising space. Some thoughts:
- I'm happy to receive fix-a-typo PRs from human users. In this case the other side demonstrated that they care by putting in a bit of manual effort, and a small PR often paves the way towards larger contributions. I also know that open source beginners get really excited about their first small contributions, and I'm honestly happy to support that.
- In contrast, the marginal effort for bot PRs is ~0. It's very easy to generate a small amount of work for a lot of people, and the nice side effect is that the bot's platform gets advertised everywhere. As a maintainer, I never consented to this and I have no way to opt out.
We are very happy users of some GitHub bots, but I feel it needs to be an active adoption decision by the maintainer. If you want to pitch me your service you may send me an unsolicited email, but don't use our public space to advertise your product without asking.
Edit: I don't want to be too harsh to OP here - at least they pointed out a small but valid issue in our case. I very much appreciate their apology at https://news.ycombinator.com/item?id=31210245
I just... think you should reconsider your stance on this. If you made a mistake in a public repo and someone else caught it (via scan of your repo or otherwise), it's a pretty bad look to be anything but grateful at that point, PR benefits for the bot aside.
The problem with scanners is that they usually have a pretty high false positive rate. When automatically opening the PR, they are basically putting the human review part on the maintainer (burdening them with additional and possibly useless work) while also using their repo as advertising space without consent. When the scan goes wrong and has a lot of false positives or it looks like they just got lucky, it's easy for a maintainer to feel like most of the cost was handed to them, while most of the upsides (like QA and brand recognition) are reaped by the bot. When a human opens the PR, you at least know that they valued your time and checked the changes beforehand, even if it's based on the results of the bot and contains the same errors.
Now, if the bot catches an actual error and improves the software, the result is obviously net good and the tad of free advertising is deserved. But it can easily feel like a PR campaign paid for with carelessly annexed maintainer time and in quite a few cases, it simply is.
How high is the false positive rate, though? I would say even at 80%, the bots have at least found enough possible bugs worth attention that wouldn't have been found by human review alone.
I understand the sentiment but you should be judging the PR, not the source. Ask yourself: would you have happily accepted the same PR that the bot sent if it came from a human?
By all means, I am not against having bots identify themselves properly, my point is that "effort from bot PR is ~0", "it advertises their platform" are simply not the right reasons to judge this situation by.
Sure, you would probably treat it differently, but isn't it elitist and harmful to the open source community in general to outright shoot down or discourage any PR from a lesser-known or unknown source if it is a good PR? I think we should encourage novices to contribute, and we shouldn't be hostile to them, so that they can get past the novice phase and become trusted contributors. If a bot produces valid, helpful, and well-formed PRs, why would you discriminate against it completely when it improves your codebase?
> I never consented to this and I have no way to opt out.
You have a public repository on GitHub. You are free to switch it to private, but otherwise this is absolutely illogical. No one needs your consent to submit PRs to and highlight a public repository.
This is equivalent to having a website and then getting angry about people linking to it, or putting your artwork up for public viewing and then getting angry at someone pointing out a small tear in the fabric.
Actually, now that I think of it, it’s better comparable to someone bringing in a handheld scanner with a company name on it, scanning the artwork and then pointing out the tear.
Which is still totally fine. You’ve given implied consent by making it available to the public. You have decided to make it possible for the public to view it, criticize it and link to it.
Quoting Wikipedia:
> Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, *study*, change, and distribute the software and its source code to anyone *and for any purpose*
> Actually, now that I think of it, it’s better comparable to someone bringing in a handheld scanner with a company name on it, scanning the artwork and then pointing out the tear.
No, it's more like somebody sending to your lab, uninvited, an impersonal inspection bot with another company's branding on it, which doesn't only disclose potential issues to you but advertises them across the whole cyberspace.
And in case of OSS this lab may be my tiny garage where me and friends tinker on stuff.
Choosing to make the results of our passion or work free for all to study and use should not come with a liability of having to deal with hordes of such bots.
Only if you establish your lab in a tent on the street and put a sign that says "for public display" on it.
> And in case of OSS this lab may be my tiny garage where me and friends tinker on stuff.
That's not OSS. OSS would be leaving the garage door open, putting your garage on Google Maps and freely allowing anyone to walk in and see what you're doing. That's OSS.
Then getting angry about it is what you and OP are doing.
But that is not up to you to decide in the case of OSS.
Public websites get crawled and indexed hundreds of times per day and sometimes linked to even with criticism. Would you not say this is the same concept?
Outside of my system vs. inside of my system. PRs count as the latter in my view. It's filing a task, for me to do, so it should not even seem like it is coming from a non-individual.
> No, it's more like somebody sending to your lab, uninvited, an impersonal inspection bot with another company's branding on it, which doesn't only disclose potential issues to you but advertises them across the whole cyberspace.
GitHub isn't your lab. It's Microsoft's lab. (They just rent out space free of charge.)
I stand corrected. (You forgot that they also mine my lab for data subsequently used in paid solutions.)
My point is, we should cherish the culture that enables progress and learning by routinely opening works of passion to free use and contribution. We could cherish it by employing a sense of ethics and adherence to certain protocols of behavior, these don't need to be spelled out but those of us who know better can lead by example. Putting OSS maintainers under undue stress has come under fire before, and this looks like one of those cases.
Is having a publicly reachable email address consent to receiving unsolicited emails? Is having a public postal address consent to receiving mailed advertisements? Legally, yeah; morally, less so.
If you believe that's a material difference, replace "advertisements" with something more like "mailers informing you on how you can improve the [security/looks/whatever] of your building".
I feel the same way about those bots that tell you about insignificant security vulnerabilities in some project you abandoned. It's basically spam.
That said, this does seem like it is a bit more useful. As long as they actually read the changes and make sure they aren't false positives. Which I'm guessing they didn't do for 666 repos.
> As long as they actually read the changes and make sure they aren't false positives. Which I'm guessing they didn't do for 666 repos.
In the article they say that "really a bot found the problem and made the PR, but really a human developer at Code Review Doctor did triage the issue before the PR was raised)".
Dependabot isn’t the only source of vulnerability fatigue, there are plenty of “researchers” who would spam your active projects about pointless “vulnerabilities”. For instance, I recently got one about a parsing issue in gmp from a human user, who probably found it by scanning PyPI. I’m not touching anything adjacent to the supposedly vulnerable codepath, and the fix isn’t even in a gmp release, meaning I would have to carry a patch if I were to “fix” it. I still responded amicably, but I was not happy.
There’s not really anything that can be done about that, yet, unfortunately. But if you’re not committing to the repo anymore, archiving it is an option. It’ll disable the issue tracker and pull request features. And if you change your mind, you can unarchive it.
> > We may be looking too deep into this but it seems like many developers think when string concatenation occurs it’s enough to declare the first string as an f-string and the other strings are turned into f-strings by osmosis. It doesn’t. We’re not suggesting this is the case for all developers that accidentally did this error, but interesting nonetheless.
I highly doubt that people believed that f-strings worked this way. Far more likely is that, for example, the expression started as one line, then got split onto two, or some such similar scenario.
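For what it's worth, the split-line scenario reproduces exactly the bug the bot flags. A toy example (variable names made up); Python silently concatenates adjacent string literals, so nothing errors, the braces just stop being interpolated:

  user_id, ip_address = 42, "10.0.0.1"  # made-up example values

  # Original one-liner: both placeholders are interpolated.
  msg = f"user {user_id} failed login from {ip_address}"

  # After splitting to satisfy a line-length limit, the second literal
  # lost its f prefix. Adjacent literals are concatenated, so this runs
  # fine but prints the braces verbatim:
  msg = (
      f"user {user_id} failed login "
      "from {ip_address}"
  )
  print(msg)  # user 42 failed login from {ip_address}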
This may also be caused by confusing syntax highlighting in some editors, for example in VSCode [1]
The variable in the second string gets highlighted (with slightly different color, but still) because it would still work with `str.format()`. GitHub doesn't seem to do this.
Indeed, if "{value} is bad" can be automatically f-stringed by an external program, then why can't Python do this automatically, so we can get rid of the f-string type as a required explicit declaration? After all, we don't specifically add a type to a number like 42 or 3.14159; those are implicitly 'int' and 'float' types.
I would use such a feature, as I always use f-strings when formatting.
> Indeed, if "{value} is bad" can be automatically f-stringed by an external program, then why can't Python do this automatically, so we can get rid of the f-string type as a required explicit declaration?
Because it would break existing strings containing braces, such as those used with `str.format`, or string.Template, or literal Jinja templates, ...
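To make that concrete, a rough sketch of what implicit f-stringing would break (names invented for illustration):

  # Today these are plain literals whose braces are filled in later:
  greeting = "Hello {name}, you have {count} messages"
  print(greeting.format(name="Ada", count=3))

  # If every literal were implicitly an f-string, the line defining
  # `greeting` would try to evaluate `name` and `count` at definition time
  # and raise NameError (or silently capture whatever variables happen to
  # be in scope).

  # A literal Jinja template kept in source would be silently corrupted
  # rather than erroring, because {{ and }} are escape sequences in f-strings:
  jinja_src = "{{ user.name }} logged in"  # as an f-string: "{ user.name } logged in"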
Yes, my use case was "I always use F-strings" so these other breakages could not occur, by definition.
I suppose it's not that much of a problem to run a program to preprocess source and add in the F.
But.. Python has a history of introducing new features that break old ones. That seems to me a balance between backward compatibility and future goodness.
the "from future import auto-fstring" construct could do it...
And, as for having to run the f-string parser: yes, but only once on static strings, which are most of them.
It's not clear to me that folks would want this behavior globally - I personally would not, because sometimes I want to be able to include curly braces in my strings without it being interpolated, and I prefer the current syntax of having that just work. I'm very comfortable with having to add the 'f' to the string syntax to declare that this particular string should be interpolated.
There are other issues you'd have to deal with as well. Would you interpolate non-literal strings (e.g. in the code `print(input())`, if the user inputs the string "{1+1}", what would be printed?)? How would you propose dealing with jinja and other strings that typically contain curly braces that should not be interpolated? This could also be an issue with (potentially long) strings containing lots of random characters: if a '}' showed up in a string somewhere after a '{', even if separated by tens or hundreds of characters, then you'll run into errors. This could be a problem if you're dealing with pseudorandom or base-92 encoded strings, or even just strings representing code (imagine a python library that generates C++ or Java code which has lots of hard coded strings with braces).
I think overall, having to specify that a particular string should be interpolated is a better solution than having to specify that a string should not be interpolated.
Well, you could put an N"your string with {} in it" prefix to specify that it is not an f-string. In fact, WHY are strings not fully specified every time? There is nothing wrong with F"This is also an f-string". After all, strings are just number sequences with special meanings assigned to some values in some circumstances.
Seems to me the problem is similar to wanting 1 to be interpreted as an int --- if you want float, you change the syntax (and therefore semantics) to 1. or 1.0
It's always possible to conjure corner-case failure modes; but shouldn't the 'common case' be catered to, more than some base-92 encoded strings?
And, by the by, more 'smarts' can be applied to automatic f-string determination. If "{variable-that-exists} foobar" is seen it could plausibly be converted to an f-string.
This leads into a much longer discussion of how our compilers/interpreters are too stupid today, and need to up their game. But probably not here, not now.... and also, thank you for your observations & comments.
That's true, 'normal' strings are the common case -- IF what you are doing requires the use of 'normal strings'. If you are doing formatting for output, then the 'normal' string is an f-string.
As to 'injection' attacks -- I have no sympathy for somebody who takes user input, does not check it, and throws it at SQL. All user input should be checked 7 ways from Sunday before it's even passed to a part of the program that's supposed to deal with said input.
From my perspective, I'm just trying to say "there has got to be a better way". yes, we can survive, but -- as the OP shows -- a bunch of auto-fixed bugs went away. Why were the bugs there in the first place?
I have a saying that programmers spend too much time debugging; and if they would just not put the bugs in, in the first place, debugging could be reduced. To do that, you need help from your language. Other things also matter, sure. But give me constructs that make it hard for me to screw up.
I agree that the language should lend itself to being as bug-free as possible, but you and I disagree on which case is the common, less-likely-to-cause-bugs case. I think it's normal strings, you think it's f-strings. Fair enough.
I do want to point out that only interpolating f-strings containing variables that exist is actually more likely to cause bugs because a string "{foo}" may be fine at one point, but then down the road another programmer happens to declare a global variable foo somewhere completely different, thus causing this string to change. The more important issue, though, is that f-strings contain expressions, not just variables, so you can do things like f'{1+1}' or f'There are {len(os.environ)} env vars defined.'. It's not clear what special cases for expressions inside the braces would magically become f-strings and which would not.
> And, by the by, more 'smarts' can be applied to automatic f-string determination. If "{variable-that-exists} foobar" is seen it could plausibly be converted to an f-string.
One obvious and dangerous application of these "smarts" is when people expect curly braces to be treated as string literals. What if a string contained an example f-string that contained {sensitive-server-information}? It's reasonable to expect that code like that wouldn't later become vulnerable to injection attacks.
> Yes, my use case was "I always use F-strings" so these other breakages could not occur, by definition.
Unless you're using any dependency at all, including the standard library.
> But.. Python has a history of introducing new features that break old ones.
It doesn't tho.
> That seems to me a balance between backward compatibility and future goodness.
That assumes "everything is an fstring" is considered "future goodness", which I'm not sure is a widely shared view.
> the "from future import auto-fstring" construct could do it...
That seems unlikely as the __future__ pseudo-package has generally been used to opt into hopefully future behaviour.
Given the Python 3 experience, somehow, I don't see "let's make all string literals into fstrings" happen any time soon, but hey feel free to create a PEP proposing that.
Alternatively, create your own import hook which does this swap before handing the module off to the compiler.
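In case anyone wants to actually try that, here's a rough, untested sketch of such an import hook (all names are mine; it rewrites plain string literals containing braces into f-strings at import time, with all the caveats discussed elsewhere in the thread):

  import ast
  import importlib.machinery
  import sys

  class AutoFString(ast.NodeTransformer):
      """Rewrite plain string literals that contain '{...}' into f-strings."""
      def visit_Constant(self, node):
          if isinstance(node.value, str) and "{" in node.value and "}" in node.value:
              try:
                  # Re-parse the literal with an f prefix; literals that are not
                  # valid f-strings (stray braces, etc.) are left untouched.
                  fstr = ast.parse("f" + repr(node.value), mode="eval").body
                  return ast.copy_location(fstr, node)
              except SyntaxError:
                  pass
          return node

  class AutoFStringLoader(importlib.machinery.SourceFileLoader):
      def source_to_code(self, data, path, *, _optimize=-1):
          tree = ast.parse(data, filename=path)
          tree = ast.fix_missing_locations(AutoFString().visit(tree))
          return compile(tree, path, "exec", dont_inherit=True, optimize=_optimize)

  def install():
      # Register our loader ahead of the default one for .py files on sys.path,
      # so modules imported after this point get the rewrite.
      hook = importlib.machinery.FileFinder.path_hook(
          (AutoFStringLoader, importlib.machinery.SOURCE_SUFFIXES))
      sys.path_hooks.insert(0, hook)
      sys.path_importer_cache.clear()

You'd call install() before importing the modules you want transformed; anything already imported is untouched. Note that docstrings and any literal containing braces get rewritten too, which is exactly the footgun people are pointing out.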
If all strings with what looks like format specs are implicitly f-strings, how do you use reusable template strings, which use the same basic internal syntax, for formatting?
f-strings, like r-strings, have uses, but, like r-strings, I wouldn't want to replace plain strings with them.
Because in some cases it is not desired, if for example the string is used in something that expands it further down the line (think SQL injection potential).
Backwards compatibility for one. Code existed before fstrings that may use curly braces, and you can currently use curly braces in non fstrings without escaping them.
Might also be a performance penalty for always having to run the fstring parser.
As a beginner, maybe. But this is code in some of the biggest open-source Python repos in existence. Probably written by someone with a reasonable level of Python expertise.
There's an experimental option for splitting long strings using the --preview flag, but it still seems to have some potential issues. That seems like it would be extremely hard to solve correctly.
I don't mind that it's a bot, and I really appreciate that apparently a human made the final review before sending the PR. But I don't like that the title of the commit is:
I expect a more neutral title for a commit, something like
> Fix fstring in <name-of-file>
Each maintainer/project has their own (weird) rules about titles, and about whether any other files must log the changes, and regression tests, and whatever they like. But I think no maintainer/project expects to see the name of the author in the commit title.
The article links to some docs for the logging module here: https://docs.python.org/3/howto/logging.html#optimization asserting that f-strings are less optimal, but the docs do not say that the expression evaluation of f-strings is not optimized out: only that the logging module tries to perform evaluation as late as possible. Where is the f-string described as suboptimal?
Relatedly, the logging optimization section suggests setting raiseExceptions to false for production logging code: where is that set? On the logger, the handler, or something else?
I was also confused by the expression evaluation thing. Reading between the lines, it seems like
logger.debug("hello %s", foo)
may be better than
logger.debug(f"hello {foo}")
in the case where the log level is higher than debug. In the first version, the final string does not have to be computed, while in the second version, we construct the string and then do nothing because the record is filtered out by the log level.
The first is better also because you can do things with loggers other than print out their contents. For example, suppose you had a statement like:
logger.debug('Database error: %s', error_message)
You probably have a logging handler that produces the normal string. But you can also have one that keeps a count of how many `Database error: %s` hits there are (as opposed to `Network error: %s`) over time. Doing the string substitution yourself would break this aggregation.
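In case the aggregation idea isn't clear, a minimal sketch of such a handler (hypothetical class name; the key point is that record.msg still holds the unformatted template):

  import logging
  from collections import Counter

  class TemplateCounter(logging.Handler):
      """Count records by their unformatted message template."""
      def __init__(self):
          super().__init__()
          self.counts = Counter()

      def emit(self, record):
          # record.msg is still 'Database error: %s' here; substitution only
          # happens in record.getMessage() / the formatter.
          self.counts[record.msg] += 1

  counter = TemplateCounter()
  logger = logging.getLogger("demo")
  logger.addHandler(counter)
  logger.propagate = False  # keep the example's output quiet
  logger.error("Database error: %s", "timeout")
  logger.error("Database error: %s", "deadlock")
  print(counter.counts)  # Counter({'Database error: %s': 2})

Had the calls used f-strings, the two messages would be distinct keys and the aggregation would fall apart.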
Although this becomes more complicated because printf-style string formatting is not free (though it's the cheapest of all methods save f-strings, if I remember correctly), and because Python does not support lazy parameters, if `foo` is a non-trivial expression, odds are good it will far outcost either formatting method.
I guess it's implicit in that f-strings, as arguments, will be evaluated before the logging function can even run whereas `debug("...", heavy_obj)` will avoid a potentially expensive `str(heavy_obj)` (or whatever the string conversion warrants.)
As for raiseExceptions, I'm not sure it's for optimization. It looks like an old sanity check for bad logging configurations.
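Going back to the lazy-formatting point, a toy example for anyone who wants to see the difference (class name is made up):

  import logging

  class Heavy:
      def __str__(self):
          print("expensive __str__ ran")
          return "heavy"

  logging.basicConfig(level=logging.INFO)   # DEBUG records will be dropped
  log = logging.getLogger("demo")

  log.debug("value: %s", Heavy())   # record is filtered early; __str__ never runs
  log.debug(f"value: {Heavy()}")    # the f-string forces __str__ before debug() is even called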
The "Use logging's interpolation" warning has always annoyed me. If logging is in your hot path, that might be an issue. Me f-string interpolating some info logs that run once for convenience is not.
I find it ironic that the article points out that relying on error-prone humans to find errors is something of a hit or miss proposition and suggests that automating error finding is an appropriate course instead of making it less likely to make the error in the first place.
For example, I wonder how many errors would have been found if the definition of a format string was the default? That is, how many times would people have written something like "hello {previously-defined-variable}" and not meant to substitute the value of that previously defined variable at runtime?
I don't think this makes sense. Plain strings and format strings are not interchangeable, and using one where the other was meant is probably a bug.
Would you expect that a user input like "{secret} please" is interpolated? If so, we hopefully agree that this would blow major security holes into any python script processing untrusted user input. And if not... Why not?
Look up how this works in Swift. They only have one string. No raw strings or f strings. Yet they have all the power of all three python string types and less syntax. It's very nice.
But they’re a distinct string syntax. Your point seemed to be that there was only one. rf"{expression}" works in Python too, note, so either way you want to interpret it, raw strings aren’t a difference.
If you only make it work with string literals (e.g. generate the underlying formatting logic at parse time), it wouldn't allow arbitrary inputs to be treated as f strings.
The assumption I'm thinking they mean is to make formatting default and unformatted not default, for example, how "raw" strings were treated, escaped characters are replaced with the ascii code by default unless the string is raw, signified by an 'r' prefixed in front.
As someone who would like to be working on new, interesting things in 2-3 years rather than bringing old code into conformance with breaking changes, this attitude captures a worrisome trend in development.
On the one hand, it's great that we have platforms that innovate and improve and harden over time, but we're also facing a development culture where more and more time is spent servicing package/platform/language/OS changes that have no material impact on our own otherwise-mature projects.
It's worth being judicious about where breaking changes are applied, right?
We’re not talking about deprecating a feature here, we’re talking about the addition of behaviour that will break existing code, potentially in non-trivial and hard to debug ways, and in ways that could easily introduce security vulnerabilities.
We've done this too many times and we've had enough pain, let's please proceed at a pace where we can worry about delivering our product and not updating formatted strings, thank you.
Did you think this through? What would you treat as a fatal error? How would the compiler know if a particular string is old style code wanting to print some characters between curly braces or new style code wanting to string interpolate a variable?
Right?! Imagine: "We're announcing Python 4. Python 3 was because we handled unicode in a way that turned out to make nearly 0 sense. Python 4 is because you lot can't put your f's in the right place."
Changing something as deeply rooted as the string type?
Python already went through exactly that disaster once before, when they changed the default string type from b""-strings to u""-strings. It took about 20 years for this transition to finally complete.
PHP has also been responsible for the majority of exploited servers and misconfigured applications. Whatever they are doing, I take it as a strong negative signal.
That's not unreasonable considering that PHP is by far the most popular server-side language. It's not like we have many hackers targeting Erlang instead.
To be fair, your suggestion might make for a more resilient default, but it's also a great way to leak data and add overhead for the default case. There are tradeoffs.
Not much overhead, I would think. We’re talking about literal strings in source code, not strings in general. It’s not much work to check those.
One thing that it would break is that strings read from files would be treated differently from those in source code, even those read from files that logically “belong” to the application (say config file)
I don’t think that’s an issue, though.
Also, in Swift "\(foo)" does string interpolation. I haven’t seen people complain it leaks data or makes Swift slow (but then, it’s not fast at compiling at all because of its rather complicated type inference)
> Also, in Swift "\(foo)" does string interpolation. I haven’t seen people complain it leaks data or makes Swift slow (but then, it’s not fast at compiling at all because of its rather complicated type inference)
I think that the claim is not that this leaks data in an absolute sense, but rather that changing the behaviour after people have come to rely on it will leak data from currently well behaving applications.
>...and suggests that automating error finding is an appropriate course instead of making it less likely to make the error in the first place.
You can't fix the syntax and standard lib of the language. It is what it is. Similarly, how many bugs would you prevent if Python had compiler support to catch those types of syntax (and type) errors?
This is how bash works. Any string with a $ in it will be interpolated unless you escape it, and it also depends on whether you use double- or single-quoted strings.
Spaces as list separators could also fall into this philosophical question of what makes the most sense. Sometimes it is super convenient, until you have actual spaces in your string and it becomes a pita.
See also the yaml Norway problem for what happens when implicit goes over explicit.
It generates about the same number of bugs, if not more, and would also end up with a code-review-doctor suggesting you use \$ over $. In the end, regardless of syntax, a human always has to make the final call on whether interpolation is wanted or not.
I was actually surprised to read that people would ignore or be annoyed by a bot raising a valid PR that can be easily merged after a quick glance. What would be the reason for that?
Automated checking of potential bugs in f-strings is hard. There are lots of false positives. You can see some discussion around this kind of rule in pylint [0]. At the end of the day, the choice to run automated linting tools on a repo is up to the maintainers. Autogenerating PRs like this is incredibly noisy and comes off to me as a blatant advertisement for their "code review doctor" product.
The article specifically mentions that they were not auto-generated,
"It was also interesting to see the reaction from open source developers to unsolicited pull requests from what looks like a bot (really a bot found the problem and made the PR, but really a human developer at Code Review Doctor did triage the issue before the PR was raised)"
In reactions they conveniently left out "false positives we still hadn't weeded out". On top of that it can be annoying to have bots making trivial PRs in their own format when you've got a well defined process for it. Lastly it was basically spamming an ad link for the service at the end of the PR comment - even if the other issues didn't come up it's not always well received to do that.
Looking at one bot it doesn't sound bad, but when you have everyone's bot doing this kind of stuff it can quickly become more of a nuisance than a help.
You're assuming that the PR is valid, but a maintainer can't make that assumption. They have to do the thankless work to figure out the context and handle the fallout if they get it wrong. Let's look at who wins:
* Small benefit to bot creator
* Tiny benefit to project
* Modest cost to maintainer
Waves of low-effort resume-padding commits are already a thing. Not a big problem, but bots clearly have the potential to multiply the small problem into a big problem.
I'm still open to the idea that bots could be a net win, because most projects really do have heaps of small simple mistakes lying around. I'm sympathetic to the maintainers though. They always seem to get the short end of the stick.
I would never tolerate ads in my commit history. That's ridiculous.
It's basically using open source repos as an advertising platform for their static-analysis bot.
If they want to offer services, they can reach out to the maintainers. This is different from a human opening a valid PR on an OSS repo, since the commit message includes an ad and now they're advertising on HN.
I expect to see the entire gamut of possible reactions with a sufficient number of bot PRs. But in looking at 10 of them at random, I didn't find a single "negative response."
(I don't think ignoring it is invalid or wrong by any means, given there's so many reasons one might not engage in a timely manner, or at all, in the issues section or PRs. I don't monitor my repos issues because I just don't feel interested in supporting my code. Feel free to fork or ignore!)
I looked through all three. The first isn't really a complaint because the bot acted in good faith and found an error. In the second one they complained about a missing unsubscribe link (reasonable), and in the third one, the author should update their code so it doesn't create a variable named path, then a non-f-string that includes "{path}". I had to stare at the author's comment that it was a false positive for quite a bit to convince myself they were right.
I will point out that in the first two issues, the repo owners also edited the initial report with something along the lines of "removed ad".
I disagree that the first isn't a complaint -- the owner stated that this behavior isn't appreciated but decided to let it slide because the issue was valid.
In the third issue, the owner also explicitly stated: "I don't think bots posting unsolicited static analysis results are a good idea." I have no opinions on whether the code should be clearer, but that doesn't change the validity of the reaction.
The bot-account's (apparently human-written) reply of "you're very welcome" to the complaint in the third issue is downright dismissive of the problem and kinda passive aggressive. While it seems that the bot did good work overall, the human(s) handling edge cases need work.
klyrs was right about the reply from me (a dev behind Code Review Doctor) being dismissive in the issue. I apologise for that.
FWIW my reaction was classic "expectations not meeting reality": weeks of work to do (what I thought) was a mutually beneficial helpful thing. I was naively not expecting non-positive responses and was ill prepared when you raised valid concerns I had not considered.
Again, I am working on that and sorry I was passive aggressive to you.
I do think the question of "how should bots that do static analysis work" is an important one, but in the meantime, people are gonna bot and repo managers are gonna complain.
To me, well-intentioned systems with a high true positive rate and low false positive rate are welcome so long as they follow reasonable etiquette and norms, which this group seems to do.
> After creating 69 pull requests the reaction ranged from:
> Annoyance that a bot with no context on their codebase was raising pull requests. A few accepted the bugs were simple enough for a bot to fix and merged the pull request, but a few closed the pull requests and issues without fixing. Fair enough. Open source developers are busy people and are not being paid to interact with what they think is just a bot. We’re not entitled to their limited time.
> Neutral silence. They just merged the PR.
> Gratitude. “thanks” or “good bot”.
I appreciate their self awareness about responses from maintainers.
I like Python although I don't use it too often. Would it be unfairly critical of me to say that this is the outcome of a bad design choice? Ideally, languages should be designed so that a bug like this, which is so widespread and easy to create, is caught via some mechanism, either linting or some other part of the process.
f-strings are a recent (maybe not so recent now) addition to the language, so the errors stem from them being "new": people's reflexes and carefulness haven't adjusted to them yet.
I think that, in addition to the suggestion for linters, updating IDEs/editors to incorporate them would help. Syntax highlighting is the primary reason not terminating strings isn't that common of an error anymore; coloring f-strings differently than a normal string might help (or maybe it would make things ugly, I don't know).
Compare to TypeScript, which has a very small but significant difference: it uses backticks instead of an f-prefix.
I think this minor difference eliminates all confusion about whether interpolation propagates when concatenating f-strings and normal strings. Same when you split an existing f-string in two because it became too long: there is no risk of forgetting a backtick on the second part, the way you would forget the f-prefix, because if you do, the closing tick doesn't match the opening.
Linting the existing f-strings is, as shown by this bot, unfortunately very difficult.
To play devils advocate, that's a matter of perspective. There's nothing special about "{text}". It's just characters. One could claim that the default analysis should be "that's fine", with the option to make it special available with the "f". ;)
But, perhaps you're right, and the total number of bugs would be reduced with f-strings, but that would require making them default back in python 1.0.
The linter I use has warnings for things-that-look-like-f-strings on by default. But some of my projects have f-string-like text, so special text to tell the linter to ignore them is required all over the place.
Most developers will require an arsenal of static analysis tools to achieve maximum productivity. Linters are an example of such a tool, but they don't exist as part of the language spec itself, AFAIK.
Maybe concatenation of an f-string and a string should yield an f-string by type promotion? String is morally "any", so it feels to me like a contextual narrowing of type.