Years ago, when I had to switch from Linux to Mac for work, the different keyboard shortcuts for simple things like copy, paste, word left, move selection with cursor, etc. killed me. There are ways to change some of them, but Electron apps and various websites have hard-coded emulations of the defaults.
My coworkers simply refused to empathize with my frustration, but it's seriously like having a malfunctioning limb when a technology you've learned at a nearly neural level starts failing you.
Imagine a nightmare where when you try to open your hand and it closes, you try to move your arm outward and it smacks you in the face, you try to look down and your eyes instead flail randomly, and your legs are moving in slow motion while the monster is hunting you down.
Now imagine coming from years of Mac use to Linux or Windows, and discovering to your horror that because the modifier key on those platforms isn't cmd but ctrl you cannot do many of the default shortcuts you're used to in the terminal.
Yep, Apple really nailed it with that choice. Not only can you use GUI key combos in the terminal, but also you can use terminal line editing key combos in the GUI. That's something that seems like it ought to be possible in Linux, but afaik it is not.
One thing on my wish list of things that will probably never happen is an Emacs/Jupyter hybrid built on a full-on, highly extensible, dependently typed programming language. By keeping the specification of the components as abstract as possible, one could reinterpret the same code to give a completely different and customizable user experience. Probably with a lens-based GUI toolkit for interactive widgets.
I have no idea if it is possible, and maybe I should be the change I want to see in the world.
EDIT: I realize now this doesn't quite follow the conversation, but I wanted to get the thought out somewhere.
Can you expand on some of these points? I fail to understand (I need an ELI5, from first principles).
I think I get the premise, an Emacs/Jupyter hybrid with an extensible programming language (+1 for typed).
> By keeping the specification of the components as abstract as possible, one could reinterpret the same code to give a completely different and customizable user experience.
Custom UX is a pet topic of mine, and has been since the 1990s. I have intuitions but no clear understanding of what you mean here. Does that essentially mean decoupling model/view, a more modular or stateless metaphor?
> Probably with a lens based GUI toolkit for interactive widgets.
lens? (impossibly hard for me to Google what that generic word refers to, I just tried)
————
I'd like to 'get' to the essence of the paradigm you're suggesting. What the 'magic' is, its endgame, what it 'feels' or looks like.
As for the "abstract specification", I interpret it as decoupling the view from the model too, but maybe also expressing how you can represent the model? Like "option" being representable by a dropdown or radio buttons. I don't know, I agree it's vague.
So there are sort of several ideas floating in my head. Keep in mind, these are very loose and I am talking way out of my league here.
I said lenses, but the more general term is optics (still vague and unsearchable, I know). A certain representation of optics was popularized in Haskell by Edward Kmett partly as a way to solve the record access problem, but they are way more useful than that.
Lenses, in the context of programming, can sort of be seen as a vast purely functional generalization of l-values.
When you are manipulating affordances in a user interface, you can view it as manipulating part of a larger state. That is something that fits very well into the framework of optics. A simple example would be a color picker using different color models: HSV and RGB are both views of the same object. A slider for each of hue, saturation, value, red, green and blue can be seen as a lens from the color to a magnitude. By adjusting the slider, you change the state.
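The slider idea translates fairly directly into code. Here's a minimal Python sketch (all the names here, `lens`, `over`, `red`, `hue`, are my own invention, not any particular optics library): a lens is just a view/set pair, and the hue slider is a lens that happens to pass through a change of representation.

```python
import colorsys

# A lens is a (view, set) pair: view extracts a part of a whole,
# set rebuilds the whole with that part replaced.
def lens(view, set_):
    return (view, set_)

# Color state stored canonically as an (r, g, b) tuple in [0, 1].
# The "red" slider is a plain field lens.
red = lens(lambda c: c[0], lambda c, v: (v, c[1], c[2]))

# The "hue" slider is a lens through a change of representation:
# view converts RGB -> HSV and reads h; set writes h and converts back.
hue = lens(
    lambda c: colorsys.rgb_to_hsv(*c)[0],
    lambda c, v: colorsys.hsv_to_rgb(v, *colorsys.rgb_to_hsv(*c)[1:]),
)

def over(l, f, whole):
    """Apply f to the focused part -- what dragging a slider does."""
    view, set_ = l
    return set_(whole, f(view(whole)))

color = (0.2, 0.4, 0.6)
color = over(red, lambda r: min(1.0, r + 0.5), color)  # nudge the red slider
h = hue[0](color)                                      # read the hue slider
```

The point is that both sliders act on the same underlying state; the RGB/HSV difference lives entirely inside the lens.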
Some parts of a user interface are dynamical systems, i.e. they evolve over time based on state and locally available actions. One can imagine a platformer game with a changing player position, or a media player with a slider. FRP (functional reactive programming) delves into this, but I'm not too familiar with the literature. I am vaguely aware that lenses can be connected to dynamical systems, but I don't understand it at all.
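To make the dynamical-system reading concrete, here's a toy Python sketch of the media-player example (the `step` function and the action encoding are made up for illustration): the next state is a function of the current state, an optional action, and the passage of time.

```python
# A UI component as a discrete dynamical system. State is
# (playback position, playing?); actions arrive from the UI,
# and time advances the state on its own while playing.
def step(state, action, dt):
    pos, playing = state
    if action == "toggle":
        playing = not playing
    elif isinstance(action, tuple) and action[0] == "seek":
        pos = action[1]          # the user dragged the slider
    if playing:
        pos += dt                # time passes
    return (pos, playing)

state = (0.0, False)
for a in ["toggle", None, None, ("seek", 10.0), None]:
    state = step(state, a, dt=1.0)
# state is now (12.0, True): played 3s, sought to 10s, played 2s more
```

Note the slider appears twice: as a lens reading `pos` out for display, and as a source of `seek` actions feeding back in.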
As for reinterpreting code: in Haskell, and typed functional programming in general, there is a tower of different abstractions you can use to describe different levels of reified computation (Functors, Applicatives, Arrows, Monads). You also get effect systems, which make this more tractable for the layman who doesn't want to read ω+1 monad tutorials.
In these abstractions, one can make "free" constructions, which sort of have the flavor of making the thing as syntactic as possible while still following the laws that must be upheld by the abstraction. This makes it possible to write code separately from the way it is interpreted.
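A small Python sketch of that separation (not a faithful free monad, just the flavor; every name here is invented): the program is plain data, as syntactic as possible, and each "interpreter" gives the same syntax a different meaning.

```python
from dataclasses import dataclass

# A tiny reified command language: programs are pure syntax,
# with no interpretation baked in.
@dataclass
class Say:
    text: str

@dataclass
class Ask:
    prompt: str
    bind: str  # variable to bind the answer to (a cheap stand-in for
               # the continuation a real free monad would carry)

program = [Ask("What is your name?", "name"), Say("Hello!")]

def run_console(prog, answers):
    """One interpreter: 'execute' against canned answers, collect output."""
    env, out = {}, []
    for cmd in prog:
        if isinstance(cmd, Ask):
            env[cmd.bind] = answers[cmd.prompt]
        else:
            out.append(cmd.text)
    return env, out

def run_doc(prog):
    """Another interpreter over the same syntax: render documentation."""
    lines = []
    for cmd in prog:
        if isinstance(cmd, Ask):
            lines.append(f"? {cmd.prompt} -> {cmd.bind}")
        else:
            lines.append(f"! {cmd.text}")
    return "\n".join(lines)

env, out = run_console(program, {"What is your name?": "Ada"})
doc = run_doc(program)
```

Swap in a third interpreter (GUI forms, a test harness, a logger) and `program` doesn't change at all; that's the "write code separately from the way it is interpreted" part.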
I only have a vague feeling that these concepts will fit well together. Perhaps my head is too far in the clouds.
EDIT:
As far as model-view separation goes, that's what it is. But the idea is to stratify it even further: you have the larger state of the whole application, within which you can increasingly select parts of the whole and manipulate them.
The lens part captures not only the translation from the model to the view, but also the translation from actions on the view back to the model.
Replying because it's too late to edit.
EDIT: I'm struggling to find the source connecting lenses to dynamical systems. I might have misremembered, and was pretty tired, so take that with a grain of salt.
That's a lot to take in, let me get back to you a bit later.
Extremely interesting, I must say at first glance. You're definitely technically stronger than me in most of these topics, I'll have to work on it a bit to clarify some of these concepts.
Thanks for an insightful and candid bit of thinking out loud. Much appreciated. I wish other people would chime in as well to further the discussion; I'm afraid I won't be able to elaborate much technically, but rather speak in high-level UX concepts. What's interesting is that your general approach seems to answer an essential problem (and opportunity!) in systems whose complexity far exceeds human cognition. That's what I get from the 'lens': not what's on Google about it, but the object in your head you call by that name. It is sometimes hard to explain such things in a word; it rather takes a book chapter.
How would Emacs with a typed programming language work with function redefinition?
One of the benefits of the environment is I can change code very easily and see the changes quickly. But with types I'm not sure how you'd change the signature of a function and get it to recompile if it's already used elsewhere.
That's one thing I have been thinking about. I think Haskell and other statically typed functional programming languages would benefit from a more interactive, REPL-like experience for "non-pure" programming, where you are sort of changing the wheels on a moving car.
This has a very reactive flavor, so I think you could have a sort of functional reactive approach where your hot-swappable definitions are represented as behaviours over time.
How to make that get out of your way until you actually want to swap things out, I have no clue. I think it will involve some sort of effect system, but I'm not sure if that will solve the problem.
I realize now that you were talking about changing types, which I didn't factor in. I guess what would happen is you would cascade the hot swapping, reloading anything which depends on the function for which the type has changed.
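Here's a rough Python sketch of that cascade (entirely hypothetical machinery; `define`/`swap` and the builder convention are made up for illustration): dependencies captured at build time go stale when a definition changes, so swapping one definition rebuilds its transitive dependents.

```python
builders = {}   # name -> (builder function, names it uses)
live = {}       # name -> current live definition

def define(name, builder, uses=()):
    builders[name] = (builder, tuple(uses))
    live[name] = builder(live)

def swap(name, builder):
    """Replace one definition, then cascade: everything that
    (transitively) depends on it is rebuilt against the new version."""
    builders[name] = (builder, builders[name][1])
    dirty = {name}
    grown = True
    while grown:                       # transitive closure over the graph
        grown = False
        for n, (_, uses) in builders.items():
            if n not in dirty and any(u in dirty for u in uses):
                dirty.add(n)
                grown = True
    # rebuild in definition order, which here doubles as dependency order
    for n in builders:
        if n in dirty:
            live[n] = builders[n][0](live)

define("double", lambda env: (lambda x: x * 2))

def build_quad(env):
    d = env["double"]   # captured at build time, like a compiled-in reference
    return lambda x: d(d(x))

define("quad", build_quad, uses=("double",))

before = live["quad"](3)                           # 12
swap("double", lambda env: (lambda x: x * 2 + 1))
after = live["quad"](3)                            # 15: rebuilt against the new double
```

A type change would be the same story with a type checker in the loop: the dirty set is exactly what has to be re-checked and recompiled.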
I can't speak to Windows but at least on Linux that's entirely determined by your desktop environment and terminal emulator. For example, KDE offers near complete configurability of keyboard shortcuts including the ability to specify different sets of active shortcuts based on an application being open or in focus. Taking it even farther, you can use xmodmap (deprecated but simpler) or xkb (full functionality but more complicated) to completely redefine how physical keypresses are translated into software events.
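To give a concrete taste of that last point (a hedged sketch: `ctrl:swapcaps` is a standard XKB option, but keycodes vary by keyboard, so check yours with `xev` before remapping):

```sh
# Swap Caps Lock and Ctrl session-wide via an XKB option:
setxkbmap -option ctrl:swapcaps

# Or remap a single physical key with xmodmap (deprecated but simpler),
# e.g. turn keycode 135 (often the Menu key) into an extra Super key:
xmodmap -e "keycode 135 = Super_R"
```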
You can do virtually anything on Linux if you're willing to invest the time.
Yeah, I've set it up before, but there's always a program or two that doesn't pick up the settings. And you still don't get the ctrl-<letter> line editing tools in text boxes like you do on Mac OS, so I feel it's more trouble than it's worth.
Luckily I haven't run into that, but then I'm not using any commercial software currently and the stuff in the official repositories tends to work the way you would expect it to. Generally your settings should work pretty much independently of any program you happen to be running - xkb in particular is processing the key events at quite a low level.
Regarding the line editing tools you mentioned, that sounds to me like something built into the native UI toolkit (I'm just guessing here though). Reconfiguring how physical keys are interpreted can obviously only do so much. I'd never heard of those shortcuts before, and they sound like a really useful feature.
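For GTK apps at least, this does exist at the toolkit level: GTK ships an "Emacs" key theme that enables Ctrl-A/Ctrl-E/Ctrl-K style editing in its text widgets. A minimal config fragment (GTK 3 path shown; it won't help non-GTK apps):

```ini
# ~/.config/gtk-3.0/settings.ini
[Settings]
gtk-key-theme-name = Emacs
```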
Huh, last time I tried it, I think Firefox had trouble picking up my settings. I think that was more than five years ago though, so it's likely things are better now. :)
I didn't have that much trouble switching initially.
However, when I have a task that requires continually switching between Mac and Windows machines, it gets annoying. Double that frustration again when I'm using a Windows VM on a Mac.