Hacker News | ant6n's comments

...except the operating system. And the silly notch. And the weird keyboard. And the hard palm-cutting corner. And the reflective screen. And the fingerprint-magnet materials. And the small amount of RAM. And the small SSD. And the weight.

Other than that, it's perfect! (On the balance, still better than any other laptop.)


You need to do mid-frame tile updates just to show a full bitmap frame. There are 360 8x8 tiles on the screen, but the tile indices are 8-bit (you can only reference 256 tiles). You can store only 384 tiles in VRAM, a bit more than a full screen. So the mid-screen update is to go from one tile dictionary to the other, so you can access 360 tiles in total.

You can update 1 tile per scan line (during hblank), so 154 tiles per frame (including 10 vblank scanlines). So you need 2.5 frames to replace all tiles.

If you are really smart about updates, you can “race the beam”, basically start updating tiles just as the frame starts rendering, just behind the active scan line. Then you can update maybe 280 tiles before the active scan line of the next frame catches up with you.


> You need to do mid-frame tile updates just to show a full bitmap frame.

Right!

> So the mid-screen update is to go from one tile dictionary to the other

Yes

I guess I'm missing something here, but I remember doing this myself like 15 years ago.


Perhaps it was about full-screen bitmaps (hblank tile-address/mapping switching) rather than full-screen bitmap animation?

…so you were produced in a Neanderthal fat factory?

Soap, soup, goop

Isn’t that what George Lucas said about Star Wars?

What I would like to see is a comparison of how well the models work in long running conversations:

  * do they lie and gaslight

  * do they start breaking down on very long chats (forget old context, just get dumber)

  * do they constantly try to tell me how smart I am vs. solving the problem (yes-man behavior)

  * do they follow conventions and parameters set out early in the prompts, or forget them

  * if they can't read a given file (like a PDF), do they lie about it

  * is there a branch function to go back to an earlier state of the conversation

  * what is the quality of the presentation of results (structure, wording, excessive use of tables, appropriate use of headings)

  * how does the bot deal with user frustration (empathy?)

For example, ChatGPT 5.5 is fairly smart, but its presentation of results is rather poor, unstructured, and unnecessarily long. It breaks down on long conversations (the long answers don't help here), and it can't deal with that except by lying and gaslighting. It also has very little empathy, and mostly ignores user frustration. But at least there's branching, so one can go back without completely starting over.

Gemini doesn't feel quite as smart these days. It does well with very long conversations, except it has bugs where all context gets lost or pruned, and it will lie and gaslight about it. There's also no branching, so once context is lost you have to start over. Presentation is decent. Empathy is fairly good, except when users get frustrated it gets more and more flustered and breaks down.


I think they all support branching if you use an agent like pi.

This sounds like being concerned about the adverse health effects of a steak due to sugar.

Funny how Gemini is theoretically the best -- but in practice all the bugs in the interface mean I don't want to use it anymore. The worst is that it forgets context (and lies about it), and it's very unreliable at reading PDFs (and lies about that too). There's also no branching, so once the context is lost/polluted, you have to start projects over and build up the context from scratch again.


The sheer number of bugs and lack of meaningful improvements in Google products is a clear counterargument to the AI bull thesis

If AI was so good at coding, why can’t it actually make a usable Gemini/AI Studio app?


I think Google might just be institutionally incapable of making good UX


It’s not just about “good” UX. It’s riddled with bugs.

Most of these tests are one-prompt in nature. I've also noticed issues with the PDF reader in Gemini which was very frustrating, although it is significantly better now than it was even two weeks ago. On the contrary, now GPT-5 seems to be giving me issues.

In my experience, Gemini is the most insightful model for hard problems (particularly math problems that I work on).


You know, with a bit of prompting, you can instruct Gemini to output the state of the conversation into a prompt that you can enter in a new chat and continue where you left off. But now with a fresh context window.


Not if Gemini lost all context already. Also, it doesn't really work well; a lot of the nuance and information simply gets lost.

I gave up on Gemini 3.1 Pro in VSCode after 2 hours. They fully refunded me.


Yeah if I could use Gemini with pi.dev that would be my choice. But Gemini CLI is just so, so bad.


My impression has been that ChatGPT-5.4 has been getting dumber and more exhausting in the last couple of weeks. It makes a lot of obvious mistakes, ignores (parts of) prompts, and keeps forgetting important facts or requirements.

Maybe this is a crazy theory, but I sometimes feel like they gimp their existing models before a big release so you'll notice more of a "step".


Definitely feels like it.


> In ancient times, floating point numbers were stored in 32 bits.

I thought in ancient times, floating point numbers used to be 80 bits. They lived in a funky mini-stack on the coprocessor (x87). Then one day, somebody came along and standardized the 32- and 64-bit floats we still have today.


I was going to reply that just because Intel did something funny doesn't mean that it was the beginning of the story. But it turns out that the release of the 8087 predates the ratification of IEEE floats by 2 years. In addition, the primary numeric designer for the 8087 was apparently Kahan, which means that they were both part of the same design process. Of course there were other formats predating both of these.


The Intel 8087 design team, with Kahan as their consultant (he was the author of most of the novel features, based on his experience with the design of the HP scientific calculators), realized that instead of keeping their much-improved floating-point format proprietary, it would be much better to agree with the entire industry on a common floating-point standard.

So Intel initiated the discussions for the future IEEE standard with many relevant companies, even before the launch of the 8087. AMD was convinced immediately, so it was able to introduce an FP accelerator (Am9512) based on the 8087 FP formats, which were later adopted in IEEE 754, also in 1980 and a few months before the launch of the Intel 8087. So in 1980 there were already two implementations of the future IEEE 754 standard. The Am9512 was licensed to Intel, and Intel made it under the 8232 part number (it was used in 8080/8085/Z80 systems).

Unlike AMD, the traditional computer companies agreed that an FP standard was needed to solve the mess of many incompatible FP formats, but they thought that the Kahan-Intel proposal would be too expensive for them, so they came up with a couple of counter-proposals, based on the tradition of giving priority to implementation costs over usefulness for computer users.

Fortunately the Intel negotiators eventually succeeded in convincing the others to adopt the Intel proposal, by explaining how the new features could be implemented at an acceptable cost.

The story of IEEE 754 is one of the rare stories in standardization where it was chosen to do what is best for customers, not what is best for vendors.

Like the use of encryption in communications, the use of the IEEE standard has been under continuous attack throughout its history, coming from each new generation of logic designers who think they are smarter than their predecessors and are too lazy to implement some features of the standard properly. Older designs have demonstrated that those features can in fact be implemented efficiently, but the newcomers prefer the easy path of implementing them inefficiently, on the assumption that users will not care.


The floating point "standard" basically codified multiple different vendor implementations of the same idea, hence the mess of floating point not being consistent across implementations.


IEEE 754 basically had three major proposals that were considered for standardization. There was the "KCS draft" (Kahan, Coonen, Stone), which was the draft implemented for the x87 coprocessor. There was DEC's counter-proposal (aka the PS draft, for Payne and Strecker), and HP's counter-proposal (aka the FW draft, for Fraley and Walther). Ultimately, it was the KCS draft that won out and became what we now know as IEEE 754.

One of the striking things, though, is just how radically different KCS was. By the time IEEE 754 forms, there is a basic commonality of how floating-point numbers work. Most systems have a single-precision and double-precision form, and many have an additional extended-precision form. These formats are usually radix-2, with a sign bit, a biased exponent, and an integer mantissa, and several implementations had hit on the implicit integer bit representation. (See http://www.quadibloc.com/comp/cp0201.htm for a tour of several pre-IEEE 754 floating-point formats). What KCS did that was really new was add denormals, and this was very controversial. I also think that support for infinities was introduced with KCS, although there were more precedents for the existence of NaN-like values. I'm also pretty sure that sticky bits as opposed to trapping for exceptions was considered innovative. (See, e.g., https://ethw-images.s3.us-east-va.perf.cloud.ovh.us/ieee/f/f... for a discussion of the differences between the early drafts.)

Now, once IEEE 754 came out, pretty much every subsequent implementation of floating-point has started from the IEEE 754 standard. But it was definitely not a codification of existing behavior when it came out, given the number of innovations that it had!


That is merely medieval times.

In ancient times, floats were all 60 bits and there was no single precision.

See page 3-15 of this https://caltss.computerhistory.org/archive/6400-cdc.pdf


I see their 60-bit float has the same size exponent (11 bits) as today's doubles. Only the mantissa was smaller, 48 bits instead of 52.


That written document is prehistoric.


By definition, a document that is written is historic, not prehistoric.

Prehistoric information could be preserved by an oral tradition, until it is recorded in some documents (like the Oral Histories at the Computer History Museum site).


80 bits is just in the processor. That's why you might get a slightly different result, depending on how you ordered the calculation and whether an intermediate value was stored to RAM.


The Intel 8087, which introduced the 80-bit extended floating-point format in 1980, could store and load 80-bit numbers, avoiding any alteration caused by conversion to less precise formats.

To be able to use the corresponding 8087 instructions, "long double" was added to the C language. So to avoid extra roundings one had to use "long double" variables, and one had to also be careful that intermediate values used in computing an expression were not spilled into memory as "double".

However, this became broken in some newer C compilers, where, due to the deprecation of the x87 ISA, "long double" was made synonymous with "double". Some better C compilers have chosen to implement "long double" as quadruple precision instead of extended precision, which ensures that no precision is lost, but which may be slow on most computers, where no hardware support for FP128 exists.


x87 always had a choice of 32/64/80-bit user-facing floats. It just operated internally on 80 bits.


You can set x87 to round each operation result to 32-bit or 64-bit.

With this setting it operates internally exactly at those sizes.

Operating internally on 80 bits is just the default setting, because it is the best for naive users, who are otherwise prone to computing erroneous results.

This is the same reason why the C language has made "double" the default precision in constants and intermediate values.

Unless you do graphics or ML/AI, single-precision computations are really only for experts who can analyze the algorithm and guarantee that it is correct.

