The polysemic nature of Egyptian hieroglyphic signs is hardly the main issue with learning to read ancient Egyptian. If you're a beginner slogging through elementary translation exercises, the determinatives and phonetic complements help a lot. If you've studied the signs, grammar, and vocabulary you actually need to read texts, you've already gained understanding of the context needed to interpret the function of individual signs.
I don't get your quibble about "cursive" not being an appropriate way to describe hieratic. Pretty much every Egyptologist I've heard speak on the matter uses the term "cursive," with Demotic often described as "even more cursive." And I've copied quite a bit of it and it is far faster to write with a nice fountain pen than even "cursive" hieroglyphs. It's not particularly difficult, either. Sure, it's more complex than an abjad or an alphabet, but I don't see what that has to do with anything. The complexity is far more in reading it than writing it. If we're going to talk about difficulty in both reading and writing, Demotic is worse. And let's not even get into Ptolemaic-period hieroglyphs...
I got an erroneous Type II diabetes diagnosis dropped into the note by the AI scribe at my last appointment because my PCP discussed the A1C test he was ordering. Would not recommend. That isn't to say that manually typed notes or speech to text dictated notes are perfect (dot phrases have ended up "documenting" plenty of conversations that never happened), but a false diagnosis of a chronic disease seems like a really bad failure.
> got an erroneous Type II diabetes diagnosis dropped into the note by the AI scribe at my last appointment because my PCP discussed the A1C test he was ordering.
No, you got an inaccurate diagnosis because your doctor didn't do their job. It's the provider's job to check notes, and this would have gotten that provider a visit with their clinical director at my org.
By that line of thinking, even if AI scribes are terrible, you can only blame the doctor because they didn’t check their notes.
In this case, as the patient, all you care about is that there was an inaccurate diagnosis in your notes. If the doctor had been typing them up by hand, presumably that would not have happened.
Similarly, if Tesla Self Driving cars got into collisions at 3x the rate of non-self-driving cars, would you defend Tesla because all issues are the drivers' fault, since they're supposed to have their hands on the wheel and be paying attention?
> By that line of thinking, even if AI scribes are terrible, you can only blame the doctor because they didn’t check their notes.
Same for any profession. If you use bad tools expect bad outcomes. Yes, I work in a company that expects employees to do their work well, and there are consequences to bad performance.
> In this case, as the patient, all you care about is that there was an inaccurate diagnosis in your notes. If the doctor had been typing them up by hand, presumably that would not have happened.
Doctors can absolutely mischart by hand. Human error is one of many reasons we moved to electronic charting. We have providers who love the tool and we see benefits from them having it, and some providers don't want it, so they don't have to use it. I've seen people say they feel it's slower, didn't like the output, and one provider simply enjoys charting. They're all good providers too.
> Similarly, if Tesla Self Driving cars got into collisions at 3x the rate of non-self-driving cars, would you defend Tesla because all issues are the drivers' fault, since they're supposed to have their hands on the wheel and be paying attention?
Bad analogy, since FSD is marketed as working without the human in the loop. This tool is EXPLICITLY meant to be operated and checked by a human.
That said, I wouldn't defend Tesla at all. However, yes, I would say that if you are supposed to monitor a "self driving" car and get into an accident, you failed. In that case I'd say it's safer to just drive manually, and many of our providers choose to chart manually.
One of my hobbies is typesetting modern editions of a certain type of rare, obscure old books that were poorly typeset to begin with. Modern OCR—and I’ve tried plenty of tools—is still rather error prone in my application.
> Some days it'll mark legitimate transaction emails from major companies as spam
I get legitimate transactional emails intended for someone else, and the senders refuse to stop them because I'm not their customer, and only their customer can request account updates. Those get marked as spam.
They chose not to do so. And the courts are no help, because generally speaking, you can't sue the government unless there's a specific law allowing you to do so (sovereign immunity). The police as individuals are generally immune from civil suits unless they violated some clearly established right (qualified immunity).
Perhaps it should be, but the courts have not agreed. See Pena v. Los Angeles for an example of an appellate case that rejected this argument. It found that a "police power" exception to the takings clause applies in such cases.