
I started with "sudo apt install python" a long time ago, and this installed python2. This was during the decades-long transition from python2 to python3, so half the programs didn't work, and I installed python3 via "sudo apt install python3". Of course, now I had to switch between python2 and python3 depending on the program I wanted to run, which is why Debian/Ubuntu had "sudo update-alternatives --config python" for managing the "python" symlink, pointing it at either python2 or python3.

But shortly after that, python3-based applications also didn't want to start with python3, because apt installed python3.4, while Python developers wanted to use the latest new features offered by python3.5. Luckily, Debian/Ubuntu provided python3.5 in their backports/updates repositories. So for a couple of weeks things sort of worked.

But then python3.7 was released, which was definitely too fresh to be offered in the OS distribution repositories. Thanks to the deadsnakes PPA, though, I could obtain a fourth-party build by fiddling with some PPA commands and adding some entries of debatable provenance to /etc/apt/sources.list. So now I could get python3.7 via "sudo apt install python3.7", and all went well again.

Until some time later, when I updated Home Assistant to its latest monthly release, which broke my installation, because the Home Assistant devs love the latest python3.8 features. And because python3.8 wasn't provided in the deadsnakes PPA for my Ubuntu version, I had to look for a new alternative. Building Python from source never worked for me, but thank heavens there is this new thing called pyenv, and with some luck, as well as a weekend spent understanding the differences between pyenv, pyvenv, venv, virtualenv (a.k.a. python-virtualenv), and pyenv-virtualenv, Home Assistant started up again.
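
For the record, that update-alternatives dance looked roughly like this (reconstructed from memory; paths and priorities may differ):

    sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
    sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.4 2
    sudo update-alternatives --config python    # interactively pick the "python" target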

This wall of text is an abridged excursion through my installing-python-on-Linux experience.

There is also my installing-python-on-Windows experience, which includes: the official installer (exe or msi?) from python.org; a Windows-provided system Python, installable by ticking a checkbox in Windows's system properties; NuGet, winget, Microsoft Store Python; WSL, WSL2; anaconda, conda, miniconda; WinPython...


I understand this is meant as caricature, but for local development, tools like mise or asdf are really something I've never looked back from. For containers, it's either a versioned Docker image or compiling it yourself.


The problem for me, a non-Python developer, is that I just don't know what to do, ever, to run an existing script or program.

It seems every project out there uses a different package manager, a different version of Python, and a different config file to set all of that up.

Most of the time, I just have a random .py file somewhere. Sometimes it's a full project that I can look at to find out what package manager it's using. Sometimes it has instructions; most of the time not. _That's_ the situation I struggle with.

Do I just run ./script.py? python script.py? python3 script.py? python3.12 script.py? When I inevitably miss some dependencies, do I just pip install? python -m pip install? pipx install?

As a developer I'm sure that you just set it up and forget about it. And once things work, they probably keep working for you. But man, it really reflects negatively upon Python itself for me. I don't hate the language, but I sure hate the experience.


I believe what is missing is a way of distributing apps. You face similar issues if you get the C++ source of a random program - there are quite a few build systems in use! However, the compiled program can often just be zipped and shipped, somehow.


The C/C++ ecosystem is a bit saner, but requires more expertise to fix. As long as you figure out the build process, you can usually rely on the distro packages. For Node and Rust, people really like to use the latest version rather than the LTS one for their software.


I'm not in the market of selling python programs, but pyinstaller --onefile exists. It's imperfect, but I'm surprised it hasn't seen more uptake.
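
For the unfamiliar, the whole workflow is roughly this (the script name is a placeholder):

    pip install pyinstaller
    pyinstaller --onefile yourscript.py
    ./dist/yourscript    # a single self-contained executable, interpreter included

The main caveat is that the result is platform-specific, so you build once per target OS.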


uv solves this (with some new standards). ./script.py will now install the Python version, create a venv, and install dependencies (very quickly) if they don't exist already, given a script header like this:

    #!/usr/bin/env -S uv run --script
    # /// script
    # requires-python = ">=3.12"
    # dependencies = [
    #     "ffmpeg-normalize",
    # ]
    # ///
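
Make it executable once and it behaves like any other program; only the first run should be slow, since uv caches the interpreter and the environment:

    chmod +x script.py
    ./script.py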


I think mise can support uv too as its backend: https://mise.jdx.dev/mise-cookbook/python.html#mise-uv

Also, I understand the appeal of mise, but I personally just prefer using uv for Python and bun for TypeScript, both of which can run any version of python/node (Node-compliant?).

I still like the project, though. But I tried to install Elixir using it and it was a mess, man.


Your comment shows the sad state of software quality these days. Rust is the same: move fast and break things. And lately Mesa has started to suffer from the same disease. These days you basically need the same build env as the one on the developer's machine or the build will fail.


I was trying to install Stable Diffusion just yesterday. They use Conda, so I installed it and tried to follow the instructions. First, the yaml file they provided was not valid. Then, following the commands to install the packages explicitly failed because my Rust toolchain was old, so I updated it, only for some other Rust dependency to fail to build; it didn't even compile. Such a shit show.


Bad software quality is when you update your software frequently.

Instead, we should always choose the common denominator of the most obsolete software platform imaginable. If there is an OS that has not been maintained for several decades, then that is the baseline we should strive to support.

Using an operating system with old libraries and language runtimes is not a personal preference with the consequences of restricting oneself to older software versions, no, it is a commandment and must not be questioned.


Please no, I have to deal with old (but still supported) RHEL versions, this is definitely not the way to go.

You have to use ancient C++ standard versions, deal with bugs in libraries that were fixed years ago, and lose out on all kinds of useful improvements, or you end up retrofitting a modern toolchain onto an old system (but you still have to deal with an old glibc).

It’s hell. Just make the tooling/QA good enough so that everyone can run on the latest stable OS not too long after it’s released.


I think I have a similar experience in some ways, but building Python from source should work on Linux, in my experience. On a Debian-ish system, I'd expect that apt-installing build-essential and the libraries you need should get you there. I've done it, with some pain, on Red Hat-ish distros, which have tended to ship Python versions older than anything I'd worked with. (I guess it's better these days..?)
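
A rough sketch of that on Debian/Ubuntu (the -dev package list is from memory and varies by release; the Python version is just an example):

    sudo apt install build-essential libssl-dev zlib1g-dev libbz2-dev \
        libreadline-dev libsqlite3-dev libffi-dev liblzma-dev
    wget https://www.python.org/ftp/python/3.12.4/Python-3.12.4.tgz
    tar xf Python-3.12.4.tgz && cd Python-3.12.4
    ./configure --prefix="$HOME/.local" && make -j"$(nproc)"
    make altinstall    # altinstall avoids clobbering the system python3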


I started at about the same time you did, and I've never seen an instance of software expecting a Python version newer than what is in Debian stable. It happens all the time with Node.js, Go, or Rust, though.





All the weird proprietary Canonical stuff they try to put into vanilla Debian to replace common components:

snap, lxd (not lxc!), mir, upstart, ufw.

It's neverending, and it's always failing.


LXD was forked as Incus, and it’s an absolute delight.

Seamless LXC and virtual machine management with clustering, a clean API, YAML templates, and a built-in load balancer; it's like Kubernetes for stateful workloads.
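
For anyone curious, day-to-day usage looks roughly like this (image alias and instance names are just examples):

    incus launch images:debian/12 box        # spin up a system container
    incus exec box -- bash                   # get a shell inside it
    incus launch images:debian/12 vm1 --vm   # same workflow, but a full VM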


Incus is fantastic. I think Proxmox is where everyone is migrating to after the VMWare/Broadcom fiasco, but people should seriously consider Incus as well.


How do I change the parameters? The demo is constantly rerunning.


That shouldn't happen? Normally, the button is grayed out while optimization is running, but after it converges or reaches the max number of steps, you can change the parameters and restart. You may want to lower the max number of steps when trying things out. Sorry if it's a bit clunky!


I'm seeing the same thing: after it converges, there is a pause of a second or two, and then it starts again, making it impossible to set the number of parameters. On Chrome, mobile.


> Paralleling 18650's is relatively easy. You need to match voltage to within a few mV and make sure the connection is really solid (welded) to ensure they stay paired perfectly.

These requirements are already not easy, and there are still plenty of things to consider when using LiPos in parallel (e.g. identical health, preferably batteries from the same batch, to increase the chance they age identically).


OP exaggerates the difficulty. You can trivially parallel the vast majority of lithium batteries so long as their voltage is reasonably close. I personally wouldn't fuss much over a 100 mV difference, or even more in most cases, unless it's a massive battery or a power cell capable of delivering and accepting very high currents; charging most cells will often involve raising the voltage 200-300 mV during the constant-current phase, so you can safely parallel with a difference like that.

You can match up pretty different batteries in parallel as well. One will take more load etc, but this is not usually a problem. It's not ideal, but I think people often exaggerate the dangers.

Series is much more problematic, since most balancing circuits have very limited capacity to balance mismatched batteries.


> These requirements are already not easy,

Get them reasonably close, then leave them connected by a moderate-value resistor for hours. Then you're within mV.
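
Back-of-envelope, with made-up but plausible numbers: a 100 mV mismatch across a 10 Ω resistor drives only I = 0.1 V / 10 Ω = 10 mA, and the current falls as the voltages converge, which is why equalizing the cells this way is a gentle, hours-long process.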

If you're just concerned with getting 95% capacity from each battery, "close" is good enough for the rest.


Citation needed. From mining through production to recycling, NiMH batteries are ecologically inferior to alkaline batteries. Break-even and superior performance may only set in after many recharge cycles, which NiMHs may never reach (ageing, rare usage).


It's a bit odd to declare "citation needed" and then claim things like "rare usage" which just so happen to suit your argument, while ignoring things like, say, the fact that NiMH batteries mean batteries are only shipped to the end-user once.

I use NiMH batteries in all my thermostats, two scales, etc. Bought them 10 years ago or so. The thermostats get charged every few months and the scales every few weeks.

I think ~12 or so NiMH batteries have, by this point, by rough back-of-envelope math, replaced thousands of alkaline batteries.

Did it occur to you that probably one of the most energy-intensive parts of a AA battery's life is its transportation from factory to user? Which NiMH batteries only have...once? And most of that transportation is powered by non-renewable fuels, etc.


A quick check on GPT suggests that shipping the thing via ocean freight is going to be comfortably less than 1% of the carbon emissions of manufacturing. Batteries are really tiny, so they ship well, and they are really complex, so they are more difficult to manufacture than say a simple plastic toy.

I'm glad you are making use of your NiMH batteries; I'd love to see aggregate data on that. I know that in my own personal life, rechargeable AA batteries usually get lost or forgotten before their third recharge. Climate-wise, I'm probably net negative overall on my rechargeables.

But it's also kinda not the right thing to focus on for climate. Driving 50 miles in a gas car will cause a greater climate delta than manufacturing a battery. Eating 12 ounces (~340 g) of beef causes more emissions than manufacturing and shipping a battery. One international flight can be equivalent to several hundred batteries, etc.


It helps if you give us links and model numbers.


Well, there's only one real part (the cells are interchangeable, and reclaimed): https://a.aliexpress.com/_mKNNnGZ - I've gone with the purple C and U model.


thank you!


It took me two decades to finally decide to memorize "... | awk '{print $1}'" as a command-pipeline idiom for extracting the first column from stdout ("... | awk '{print $2}'" for the second column, and so on).

All it required from me was to intentionally and manually type it out by hand instead of copy-pasting, on two purposeful occasions (within two minutes). Since then it's been successfully saved in my brain.


I'd like to give a shoutout to jq for (non-JSON) text processing, and also as an almost-replacement for awk:

  echo "foo bar" | jq -Rr 'split(" ")[0]' # prints foo
I say 'almost' because jq right now can't invoke subprocesses, unlike awk and bash. But otherwise, it's a fully fledged functional language.


Same! And jq's regexp functions are quite powerful for transforming text into something more structured:

  $ echo -e "10:20: hello\n20:39: world" | jq -cR 'capture("^(?<timestamp>.*): (?<message>.*)$")'
  {"timestamp":"10:20","message":"hello"}
  {"timestamp":"20:39","message":"world"}


  $ echo -e "10:20: hello\n20:39: world" | jq -Rn '[inputs | capture("(?<t>.*): (?<m>.*)$").m] | join(" ")'
  "hello world"
Also, using inputs/0 etc. in combination with reduce and foreach, it's possible to process streams of text that might never end.
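
For instance, a sketch along these lines (the log file name is hypothetical) annotates a never-ending stream with a running line number, emitting output as each line arrives:

    tail -f app.log | jq -Rn 'foreach inputs as $line (0; . + 1; {n: ., line: $line})'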


`jq -Rr 'split(" ")[0]'`? That would easily take me three decades to memorize ;-).

For selecting the second column, PID, from `ps aux`,

    ps aux | awk '{print $2}'
works for me, as gawk's default FS field separator treats runs of whitespace as a single separation of fields ( https://www.gnu.org/software/gawk/manual/html_node/Field-Spl... ), quite similar to "standard" Open Group/POSIX awk ( https://pubs.opengroup.org/onlinepubs/9799919799/utilities/a... ).

    ps aux | jq -Rr 'split(" ")[1]'
on the other hand doesn't work here due to `ps aux` adding a variable number of spaces for formatting.

    ps aux | jq -Rr 'split("\\s+"; null)[1]'
seems to work though.


Yes, in awk, conveniently, whitespace means a run of whitespace. It gets a bit verbose in jq for the same effect; but on the other hand, you get slicing - more slices than `cut` can give you.


These comments here about more or less clever text processing tools, each with their own syntax, feel like archaic hacks. Using something like Nushell or PowerShell makes this trivial.

E.g. for the `ps aux` example, using PowerShell, selecting a certain column:

  gps | select id
The output of gps is an array of objects. Selecting a property does this for all elements in the array.
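
If I remember Nushell correctly, its equivalent is just as terse, since its ps also returns a table of records:

    ps | select pid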


People do this, but it feels a bit weird to start a whole new language interpreter which is much more powerful than the pitiful shell language we use, just to split a field. Why not write your whole program in awk, then? It's likely more efficient anyway.

The canonical shell tool to split fields is cut. Easy to use, simple to read.

For the very common use case of setting variables to fields, use the shell built-in read. It uses the same IFS as the rest of the shell.
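
A minimal sketch of the read approach, reusing the ps aux example from above (variable names are mine):

    # print the PID column using only shell built-ins;
    # the default IFS collapses the padding runs of spaces
    ps aux | while read -r user pid rest; do
        echo "$pid"
    done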


Why not `… | cut -f1 -d " "` though?


Because awk understands "columns" better:

    echo '999 888' | awk '{print $2}'
prints 888.


How is that different from

    echo '999 888' | cut -f2 -d " "
except for the default delimiter being space for awk?


    echo '999  888' | cut -f2 -d " " # notice two consecutive spaces
prints an empty line, since cut sees the run of two spaces as two delimiters with an empty second field between them.


There are invalid column separators for awk too. To handle consecutive spaces in cut you have to squeeze them first:

  echo '999  888' | tr -s " " | cut -f2 -d " "


Right, that would be expected though? I suppose awk is better for parsing formatted (padded) output.


How can it be "better" when it is exactly the same?


Speaking personally I find that I always use awk for printing (numbered) fields in shell-scripts, and when typing quick ad-hoc commands at the prompt.

The main reason for this is that it's easy to add a grep-like step there, without invoking another tool. So I can add a range-test, a literal-test, or something more complex than I could do with grep.

In short I can go from:

       awk '{ print $2 }'
To this, easily:

       awk '$1 == "en4:" {print $NF}'


Yet using cut, even with extra piping through tr or grep first, is probably over twice as fast as awk, which may or may not matter depending on what you're doing.
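
If you want to verify that claim on your own machine, here's a crude micro-benchmark sketch (timings will vary):

    yes '999  888' | head -n 1000000 > /tmp/twocol.txt
    time awk '{print $2}' < /tmp/twocol.txt > /dev/null
    time tr -s ' ' < /tmp/twocol.txt | cut -f2 -d ' ' > /dev/null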


Now imagine what all the LLM copy-pasting could do to our skills...

