My only problem is it seems to be buggy still. I just tried it on Fedora 42, and I configured the panel to my liking. Now I cannot get the panel to auto hide or dodge windows, no matter what I try. sigh
Haven't you heard? Salesforce doesn't hire programmers anymore. AI is all their CEO needs. ;p Seriously though, this behavior reminds me of Oracle, and is a great reminder that proprietary software can very quickly become a big liability.
Oracle is exactly who sprang to mind. Throughout my history as a software developer, even Microsoft has had a ton of interest in being involved in the community. Yes, they've wanted to extinguish much of it when it didn't align with their financial goals... but they were always interested in being part of the "software development conversation". Oracle, on the other hand, has never extended an olive branch. They're quite happy existing on their own proprietary island. A great reminder that "they don't want to play in the pool with you, they want to own the whole pool and charge you to swim in it".
I worked in parallel with Sun Microsystems (prior to its acquisition), IBM, and Oracle. All three were horrible in that regard. They all offered cheap services and waited until the roots were deep enough; then they would change the billing scheme, multiplying fees by up to 25.
Sun was always fair to me, so much so that I feel like it's unfair to lump them together.
I used Solaris on Sun Fire machines and the support was unmatched: we got massive tomes of manuals that described in excruciating detail exactly how the system worked in almost every scenario, plus a deep programming reference.
They never upsold us on anything, or changed the price... until Oracle bought them and jacked the support costs up (though, to be perfectly fair, they ought to jack up the support costs of old systems).
Microsoft was _awesome_ to deal with as a small company and/or an educational institution. They had special programs for startups where you could get basically anything for free, and their business side was a pleasure to deal with.
They very much understood the "Developers! Developers! Developers!" mantra.
It is impossible to know these days. I just get flooded with automated messages in random channels by them wanting to chat with me about whatever place I'm a manager at.
Other than being immutable, I doubt it. Immutable distros tend to rely on flatpaks to dynamically install new packages. Unfortunately, the flatpak codebase is largely unmaintained at this point, and it's nearly impossible to get changes merged in.
Exactly. I also like Clojure’s use of square brackets for vectors and curly braces for maps. It eliminates all the “vector-” and “map-” function calls.
Those are big quality of life improvements. I wish the other lisps would follow suit. I suppose I could just implement them myself with some macros, but having it standard would be sweet.
The Revised Revised Revised Revised Revised Revised Report on the Algorithmic Language Scheme (R6RS) specified that square brackets should be completely interchangeable with round brackets, which allows you to write let bindings or cond clauses like so:
(let ([a (get-some-foo 1)]
      [b (get-some-foo 2)])
  (cond [(> a b) -1]
        [(< a b) 1]
        [else 0]))
...but I hate that; I'd much prefer it if square brackets were only used for vectors, which is why I have reader macros for square brackets -> vectors and curly brackets -> hash tables in my SBCL run commands.
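For the curious, a minimal sketch of what such reader macros might look like in an SBCL init file. The exact shape here is my own guess (reading [] into (vector ...) forms and {} into a hash-table-building form), not necessarily how the parent does it:

(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    ;; [a b c] reads as (vector a b c), so the elements are evaluated
    (cons 'vector (read-delimited-list #\] stream t))))
(set-macro-character #\] (get-macro-character #\)))

(set-macro-character #\{
  (lambda (stream char)
    (declare (ignore char))
    ;; {k1 v1 k2 v2} reads as a form that fills a fresh EQUAL hash table
    (let ((forms (read-delimited-list #\} stream t))
          (table (gensym "TABLE")))
      `(let ((,table (make-hash-table :test #'equal)))
         (loop for (key value) on (list ,@forms) by #'cddr
               do (setf (gethash key ,table) value))
         ,table))))
(set-macro-character #\} (get-macro-character #\)))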
I think the R6RS behavior helps with visual matching, but it squanders the chance to use square brackets for something more useful (e.g. vectors), which is a shame. Another thing Clojure does is copy Arc in eliminating the parentheses around the pairs of forms in let bindings and cond forms, which really aren't needed. It just expects pairs of forms, and the compiler objects if given an odd number. The programmer can use whitespace (notably newlines) to format the code so the pairings are visibly apparent. That eliminates a surprising amount of needless parentheses, because let binding forms are used all over (less so cond forms).
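If anyone wants to experiment with the pairs-without-parens style without switching to Clojure, a toy Common Lisp macro along these lines would do it (CLET is a hypothetical name of my own, purely to illustrate the idea):

;; Clojure/Arc-style paired bindings without per-binding parentheses;
;; complains at macroexpansion time if given an odd number of forms.
(defmacro clet (bindings &body body)
  (assert (evenp (length bindings)) (bindings)
          "CLET expects an even number of binding forms, got ~S" bindings)
  `(let* ,(loop for (name value) on bindings by #'cddr
                collect (list name value))
     ,@body))

;; (clet (a 1
;;        b (* a 2))
;;   (+ a b))   ; => 3, expands to (let* ((a 1) (b (* a 2))) (+ a b))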
True, refactoring with modern machine-generated mass operations is easier than with older tools, but that doesn't mean a given set of brackets is 'useless'.
I wouldn’t go as far as “pretty awful,” but yes, it’s a keystroke more to manipulate two sequential forms instead of one. And yes, there is a slight indentation advantage when the test and the conditional code won’t fit on the same line. It’s easy enough to use “do” when the conditional clause has multiple forms, however. Personally, I’ll take those trade-offs for the reduction in clutter.
I think it's a matter of whether you're programming in a mostly applicative way† or in a more imperative way. Especially in the modern age of generational GC, Lisp cons lists support applicative programming with efficient applicative update, but sacrifice efficiency for certain common operations: indexing to a numerical position in a large list, appending to a list, or doing a lookup in a finite map such as an alist. So, in Common Lisp or Scheme, we are often induced to use vectors or hash tables, sacrificing applicative purity for efficiency—thus Perlis's quip about how purely applicative languages are poorly applicable, from https://www.cs.yale.edu/homes/perlis-alan/quotes.html.
In general a sequence of expressions of which only the value of the last is used, like C's comma operator or the "implicit progn" of conventional cond and let bodies, is only useful for imperative programming where the non-last expressions are executed for their side effects.
Clojure's HAMTs can support a wider range of operations efficiently, so Clojure code, in my limited experience, tends to be more purely applicative than code in most other Lisps.
Incidentally, a purely applicative finite map data structure I recently learned about (in December 02023) is the "hash trie" of Chris Wellons and NRK: https://nullprogram.com/blog/2023/09/30/. It is definitely less efficient than a hash table, but, in my tests so far, it's still about 100ns per hash lookup on my MicroPC and 250ns on my cellphone, compared to maybe 50ns or 100ns respectively for an imperative hash table without FP-persistence. It uses about twice as much space. This should make it a usable replacement for hash tables in many applications where either FP-persistence, probabilistically bounded insertion time, or lock-free concurrent access is required.
This "hash trie" is unrelated to Knuth's 01986 "hash trie" https://www.cs.tufts.edu/~nr/cs257/archive/don-knuth/pearls-..., and I think it's a greatly simplified HAMT, but I don't yet understand HAMTs well enough to be sure. Unlike HAMTs, it can also support in-place mutating access (and in fact my performance measurements above were using it).
______
† sometimes called "functional", though that can alternatively refer to programming with higher-order functions
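For anyone who doesn't want to click through: here is a rough Common Lisp transliteration of the hash trie idea as I understand it from that post (the post's own code is C; the essential trick is branching four ways on two hash bits per level, and every name below is my own):

;; In-place (mutating) variant: each node holds one key/value pair plus
;; four children; two bits of the key's hash pick a child at each level.
(defstruct (hnode (:constructor make-hnode (key value)))
  key
  value
  (kids (make-array 4 :initial-element nil)))

(defun hlookup (node key)
  "Return the value stored under KEY in the trie rooted at NODE, or NIL."
  (loop with hash = (sxhash key)
        while node
        do (when (equal key (hnode-key node))
             (return (hnode-value node)))
           (setf node (aref (hnode-kids node) (ldb (byte 2 0) hash))
                 hash (ash hash -2))))

(defun hinsert (node key value)
  "Insert or update KEY -> VALUE in place; returns the trie's root."
  (if (null node)
      (make-hnode key value)
      (loop with root = node
            with hash = (sxhash key)
            do (when (equal key (hnode-key node))
                 (setf (hnode-value node) value)
                 (return root))
               (let* ((slot (ldb (byte 2 0) hash))
                      (kid (aref (hnode-kids node) slot)))
                 (setf hash (ash hash -2))
                 (if kid
                     (setf node kid)
                     (progn
                       (setf (aref (hnode-kids node) slot)
                             (make-hnode key value))
                       (return root)))))))

;; (let ((trie nil))
;;   (setf trie (hinsert trie "foo" 1))
;;   (setf trie (hinsert trie "bar" 2))
;;   (hlookup trie "bar"))   ; => 2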
Ultimately, what drives human creativity? I'd say it's at least partially rooted in emotion and desire: the desire to live more comfortably; fear of failure or death; the desire for power/influence, etc. AI is devoid of these things, and thus I believe we will never truly reach AGI.
While I use functors, applicatives and monads all the time in Haskell, I have no idea what half of these symbols mean. Are these specific to category theory?
The problem is, it explains things in a language for people who know category theory, rather than for people who merely use functors, applicatives and monads in Haskell.
Indeed, we could say that those programming interfaces don't need a lot of category theory to understand. For example, in Java a functor would be called Mappable (and actually it seems there is such a thing defined in some libraries).
This is why I start with something very simple. If I detect the candidate is getting paralyzed with nerves, I'll reassure them and give little hints to help bring them out of it. After successfully completing a simple exercise, they usually feel more confident and relaxed. Only then will I give them a more challenging problem, with the caveat that I'm mostly interested in seeing how they might approach the problem and what things they can tell me about it. If they solve it, great. If they don't, did they show an ability to reason about the problem well, or identify key aspects of the problem that make it challenging? Can they communicate their thoughts as they think about it?