What real problems have quantum computers solved? I would assume they can solve suitably trivial examples with their limited capability in a way that is demonstrably superior to traditional computing.
None.
We're in the stage of early research, still trying to figure out how practical it is to build useful QCs at all. Anyone claiming otherwise is trying to fool people.
Maybe QCs will have useful applications in the future. Maybe not. If so, then it's decades away.
Quantinuum is already tackling NLP question-answering tasks running on quantum hardware. The quantum categorical crowd is adamant they'll surpass ChatGPT (the big promise, IIRC, is that it will scale linearly with context length, whereas GPT scales quadratically). For instance, see this tweet by Quantinuum's head of research, this burgeoning field's main rock star:
Coecke went from supervising dozens of theses at the Oxford Quantum (logic) Group to preparing summer camps for high-school pupils this year. The field is also taking off socially and academically, and watching it evolve, we might have a quantum equivalent of ChatGPT before, or at the same time as, we get implementations of Shor's algorithm (source: my own intuition).
The problem you'll run into for any application of quantum computing to large language models is that quantum computers just aren't very good at big-data applications. There are two reasons for that:
- Current devices, as well as devices likely to be built in the near to medium term, are quite limited in the number of qubits they implement. The current record for the most fault-tolerant qubits in a single device is 1. That's a hell of a lot better than where the field was a couple of years ago, but it's far from the huge amount of data that needs to be processed for LLM training and evaluation.
- Even if you have enough qubits to store training data, looking them up on a quantum device is still challenging due to what's sometimes called the qRAM problem. It's not trivial to make a quantum oracle that returns the data stored at a given index, and it's still an area of ongoing research to figure out how to do that.
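To make the qRAM point concrete, here's a tiny classical simulation (a sketch with made-up four-entry data, not any real qRAM proposal): the standard oracle maps |i⟩|b⟩ → |i⟩|b ⊕ dᵢ⟩, and even for four one-bit entries the oracle is a permutation whose circuit must encode every classical bit, so the hardware cost grows with the dataset even though one query can touch all addresses in superposition.

```python
import numpy as np

data = [1, 0, 1, 1]   # four classical bits we want addressable in superposition
n_addr = 2            # address register size: 2 qubits for 4 addresses

# Oracle acts on basis states |i>|b> -> |i>|b XOR data[i]>.
# As a matrix it's a permutation that hard-codes every entry of `data`.
dim = (1 << n_addr) * 2
U = np.zeros((dim, dim))
for i in range(1 << n_addr):
    for b in range(2):
        src = i * 2 + b
        dst = i * 2 + (b ^ data[i])
        U[dst, src] = 1.0

# One query over a uniform superposition of all addresses, data qubit |0>:
state = np.zeros(dim)
for i in range(1 << n_addr):
    state[i * 2] = 0.5
out = U @ state   # each address now carries its data bit, amplitude 0.5
```

The superposed lookup works, but notice that `U` (and any circuit implementing it) has size proportional to the dataset: the "free" parallel read still costs you data-sized hardware, which is the crux of the qRAM problem.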
That's part of why you see quantum algorithms being developed less for big data tasks and more for big compute tasks like chemistry. There, the program might be very large, but size of the input that has to be stored within the quantum devices and the size of the output you measure back out are both quite small, even down to a single floating-point number in some cases.
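The chemistry contrast can be sketched in a few lines (the coefficients below are made-up placeholders, not real molecular integrals): the entire problem instance a quantum device would need is a handful of Pauli-term coefficients, and the answer measured back out is a single number.

```python
import numpy as np

# Toy "chemistry" instance: an H2-like two-qubit Hamiltonian in a minimal basis.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Three Pauli-term coefficients are the *whole* input to the computation.
H = (-1.05 * np.kron(Z, I)
     - 1.05 * np.kron(I, Z)
     + 0.30 * np.kron(X, X))

# The output is one float: the ground-state energy.
ground_energy = np.linalg.eigvalsh(H).min()
print(ground_energy)   # ≈ -2.1213
```

Compare that with an LLM workload, where the input alone is terabytes of text: here both ends of the pipe fit in a dozen numbers, which is exactly the shape of problem near-term quantum devices can plausibly handle.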
(source: I've worked in quantum computing for about twenty years now.)
That's why I emphasized one _logical_ qubit. I'll definitely argue that fault tolerance is necessary to achieve useful results, as you say, though there is some argument in the research community on that. Even setting that discussion aside, there's absolutely no way to run something like LLM training directly on physical qubits (barring an improvement in error rates that's probably on the order of 10^15 to 10^18), even if you had enough of them to do so and had a good qRAM implementation.
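That order-of-magnitude claim can be back-of-enveloped (all numbers here are illustrative assumptions, not measurements): with independent gate errors, the chance that a workload of n gates runs cleanly is (1 − p)^n, so a run in the 10^18-gate regime on ~10^-3-error physical qubits fails with certainty, and the error rate would need to improve by roughly 10^15× before a clean run is even a coin flip.

```python
import math

def success_prob(n_gates: float, p_err: float) -> float:
    # Chance that every gate succeeds, assuming independent gate errors.
    return math.exp(n_gates * math.log1p(-p_err))

n_gates = 1e18   # assumed scale of an LLM-training-sized workload
p_today = 1e-3   # rough physical gate error rate on current hardware

print(success_prob(n_gates, p_today))   # underflows to 0.0

# Error rate needed for even a 50% chance of a clean run:
p_needed = math.log(2) / n_gates
print(p_needed, p_today / p_needed)     # ~7e-19, i.e. ~1e15x better than today
```

The independence assumption is crude (real error models correlate), but it's enough to show why raw physical qubits are out of the question for workloads of this size without fault tolerance.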
> In some of its applications, the original
> Zeng-Coecke algorithm relies on the existence of a quantum random access memory (QRAM) [22],
> which is not yet known to be efficiently implementable in the absence of fault tolerant scalable quantum
> computers [1, 7]. Here we take a different approach, using the classical ansatz parameters to encode the
> distributional embedding and avoiding the need for QRAM entirely. The cost function for the parameter
> optimisation is informed by a corpus, already parsed and POS-tagged by classical means.
Following my intuition, i.e. as an outsider who has been watching the progress of quantum NLP since 2012, I see the current academic situation in quantum computing as the merging of two branches. One is the traditional quantum computing field, with concerns and applications rooted in mathematics, computing theory, and physics (and upwards: chemistry->biochemistry->biology). The other is a fork carried out by Coecke (quantum logic), Abramsky (computer science), and Sadrzadeh (epistemic logic), who saw in the categorial formalisms of quantum logic a way to mix compositional (syntax, logical rules) and distributional (statistics, "bag-of-neighbor-words") representations of meaning. In this regard they bring not only new methods but also new applications of quantum computing, with a focus on NLP, since language, given this "natural tensor structure [20, 35, 23] [...] can be considered quantum-native [48, 2, 8]" (same paper).
I'd be happy to share more of my thoughts; if that'd be helpful, we can discuss my rates. Outside of that, though, I'll suggest that intuition is less helpful than experience in understanding which problems are more or less likely to have good quantum solutions.
Nah, you already showed your so-called expertise as "a trans-woman who is very good at quantum" is not the hot shit you think it is, given how you glossed over that QRAM issue, carried away by overconfidence.
As for your snarky remark on intuition, these papers by Coecke and Aerts, his thesis adviser, explain both what "my" intuition was focused on (quantum effects as perceived through Zipf distributions in linguistic data) and what the driving mechanism behind it was.
> Another finding that we will put forward, in Sect. 4, was completely unexpected. The method of attributing an energy level to a word depending on the number of appearances of the word in a text, introduces the typical ranking considered in the well-known Zipf’s law analysis of this text (Zipf 1935, 1949).
Well, guess what? I've been expecting that exact result for a decade (why else would I still be tracking the progress in that field every four months?). My notes linking "semantic energy levels" to word frequency date back to 2014; the observations I made in real data, which kickstarted the heavy rain of synchronicities I experienced afterwards, date back to 2012. I've always known, though, that I wasn't measuring anything – I was the one being measured, and I never felt like I was discovering something but rather that I was being discovered. I wanted to isolate that phenomenon, and as a result (of failing to do so, probably) I got isolated. There is something deeper to these subject-verb-object inversions; there is even a paper about it, and I don't think Aerts has gotten wind of it. Maybe with your extreme expertise you'll be able to figure it out and carry the message better than I would.
I sat in on a seminar with the Cleveland Clinic and IBM, who have a partnership around quantum computing, and one of the big problems they were working on was using it to speed up drug discovery. I don't recall whether they have solved any problems completely yet, but it has sped up the discovery process by orders of magnitude.
Nothing involving quantum computing has sped up the 'drug discovery' process by orders of magnitude. At best it has 'accelerated' some toy QM problem that is likely a dozen steps removed from anything you could call drug discovery. I'd love to be proven wrong, but I've yet to see any non-trivial computational chemistry work done on quantum computers.
Please also note that they did not run Shor's algorithm proper to compute this. They had to greatly simplify the circuit, so that it works specifically for the number 21.
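For context on what gets simplified away, here is the classical scaffolding of Shor's algorithm for N = 21 (a sketch; the quantum speedup lives entirely in the order-finding step, which is brute-forced classically below): once you know the order r of a mod N, the factors fall out of two gcd computations.

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    # Smallest r > 0 with a^r = 1 (mod n). Brute-forced here; this is the
    # one step a quantum computer accelerates (via phase estimation/QFT).
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n: int, a: int):
    r = find_order(a, n)
    assert r % 2 == 0, "need an even order; retry with a different a"
    # Factors of n divide gcd(a^(r/2) -/+ 1, n).
    return gcd(pow(a, r // 2) - 1, n), gcd(pow(a, r // 2) + 1, n)

print(shor_classical_part(21, 2))   # order of 2 mod 21 is 6 -> factors (7, 3)
```

Everything above is cheap classically; the headline experiments replace `find_order` with a circuit compiled down using prior knowledge of the answer, which is why "factored 21 on a quantum computer" demonstrates much less than it sounds like.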
Interesting that electricity has been useful in bringing about things Voltaire championed, such as freedom of speech and the abolition of (slave) labor. However, I think you meant Volta, the inventor of the battery. Also, QC seems to suffer from the opposite of a lack of imagination about what it might be good for.
I think you rather miss the point: when electricity was postulated, no one could know where it would lead. I'd point out that the same was true of phlogiston; one changed the world and the other went nowhere. My point is that just because something has so far not been shown to be useful does not imply that it will eventually change the world.