Hacker News | Pwntheon's comments

Opened the homepage on mobile and it craps out majorly. Horizontal scroll with some white space and weird floating graphics, content hidden by overflow, etc. Is this a joke?


>One thing I'd mention is that graphic editing (photoshop/GIMP/etc) is still stuck in an interface taken from paper. And that when CorelDraw and Inkscape showed a better interface that also uses a few synergizing tools, other software failed to adopt it. But the pressures on graphics software seems to be different.

As someone who has only used these applications briefly, I would love to know more about this.


I think the claim is that an interface that revolves around a bunch of different tools being applied to the image (e.g. the Lasso, Pencil, Paint, and Eraser tools on the toolbar) is imitating paper. I don't know what the better alternative is.


It depends upon what you are starting with and what you are trying to accomplish. In times past, what you were doing the image editing on also mattered.

Programs like CorelDraw and Inkscape treat anything you draw as an object. If you want to draw a circle, the program stores the parameters of the circle (e.g. position, radius, colour) in memory rather than a rasterized version of it. If you want to create a circle that looks like a sphere, you might draw a second circle inside the first to serve as the highlight, then connect the two using a blend to create the gradient effect. If you don't like the position of the highlight, you move the second circle and the blend is adjusted automatically. If you don't like the colour, you select each circle and adjust its colour.

Of course, you don't have to limit yourself to solid colours. The circle could contain a bitmap, giving you a textured sphere. The second circle doesn't have to be a circle either; it could be a crescent shape. I used to create some amazing-looking alien planets using these techniques. You don't have to limit yourself to primitive shapes, either: graphics tablets digitize a number of different parameters for each stroke. You could create a stroke with the pencil tool, then modify a parameter of that stroke so it looks like it was created by a calligraphy pen or a paint brush.

That description implies an intrinsic limitation: its greatest utility is in image creation. You could store modifications to a pre-existing raster image in a similar way, but it is not quite as useful. For detailed images, it is also CPU- and memory-intensive. That's not a huge issue today, but it was when the standard image-processing techniques were first being developed.
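To make the idea concrete, here's a minimal Python sketch of the object model described above. The class and field names are hypothetical, not any real editor's API; the point is that a "sphere" is stored as shape parameters, so moving the highlight is just a parameter change rather than repainting pixels.

```python
from dataclasses import dataclass

@dataclass
class Circle:
    # Hypothetical shape object: the editor stores these parameters,
    # not a rasterized image of the circle.
    x: float
    y: float
    radius: float
    colour: str

# A "sphere" is the main circle plus a smaller highlight circle;
# a blend between them would produce the gradient at render time.
sphere = [Circle(0, 0, 100, "blue"), Circle(-30, 30, 20, "white")]

# Repositioning the highlight is a parameter edit; the blend (and the
# rendered gradient) would be recomputed automatically from the new values.
sphere[1].x, sphere[1].y = 20, -20
```

Rasterization only happens at display or export time, which is why edits stay cheap to express but rendering detailed images was historically CPU- and memory-intensive.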


Illustrator is functionally identical in this respect, if you're more familiar with the Adobe family.


Cogmind is a newer roguelike that uses lots of effects like this. It looks amazing.


It also has one of the best user-interfaces of a text-based roguelike ever. And when I say "interface" I don't just mean the graphic design, although that's amazing too.


Also, Brogue tends to have very fancy ASCII-art animation. Cogmind is not strictly ASCII, per se.


And also with some meta tags that prevent me from pinch-zooming.


Me too. The Elongated kind has had more of a media presence, but I prefer the normal one.


Only in the most obvious of cases. Your cat reluctantly steps on an item? Cursed. Grey stone moved less than X tiles when kicked? Loadstone. Hundreds of these.


What's the context here? This is not the whole book I think?


Originally it was just a short story. The book was published later.


And there was an educational version; I read it in junior high. I read the full version in high school (not as part of a class).


Take this with a grain of salt because I'm not super well read on LLMs, but isn't their entire function built on prediction?

It sounds like a reasonable approach would be a separate "channel" that focuses entirely on the question "where is this conversation going?"; that could give a pretty good baseline for when and how to interject.


We don't have a model for "Where the conversation is going," we have a model for "What's the next token" which implicitly models "Where is the conversation going."

The difference is significant here, because direct manipulation of the implicit modeling task is required to do the type of planning that I've described.

It's the same reason these LLMs are not "agents": you can only manipulate their world model through the interface of tokens.
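A toy illustration of the point above (a bigram counter, nowhere near a real LLM): the model's only interface is "predict the next token", so any notion of where the conversation is going exists only implicitly in its statistics, never as an explicit, manipulable plan.

```python
from collections import Counter, defaultdict

# Build next-token counts from a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    # The only thing we can ask the model is "what comes next?".
    # Any "direction" it has learned is buried implicitly in the counts.
    return counts[prev].most_common(1)[0][0]

print(next_token("the"))  # "cat" — its most frequent successor
```

Steering such a model means feeding it different tokens and observing different tokens back; there is no separate handle on the implicit trajectory it has modeled.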


I believe a more accurate quote is "Everyone lies".


"Humans are bullshit generators" — Alan Kay


Maybe someone wants to start up a new store?


But for what? You still need to submit your app through the Apple review process. If this were true sideloading I'd understand, but the way it is now I don't see any advantage.

