They used to call the semantic web that OWL is a part of "Web 3.0", which failed to make an impression and was later overwritten by the "Web3" moniker for NFT grifts coined by exceptionally ignorant people.
I learned OWL the hard way. I had been involved with the semantic web for 10+ years on and off and didn't meet anyone who knew how to do meaningful modeling with OWL until last year, and that even includes famous academics who've written books on it.
OWL and RDF interest me immensely, intellectually. I've never been positioned to use either one professionally, but it looks fascinating. Is there a shorter path to successful modeling than the hard way? Is there a good source on this?
If you are willing to eat the up-front cost of coordinating global resource identification (a daunting task, make no mistake), you get non-trivial dataset integration almost for free. Imagine if concatenating two ginormous JSON documents describing different aspects of the same entity amounted to a useful merge into a single combined JSON. If you Need this with a big N, RDF has no alternative.
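A toy sketch of why this works (no real triplestore involved, just stdlib Python): because identifiers are globally scoped IRIs, an RDF dataset behaves like a set of (subject, predicate, object) triples, and "integration" of two documents about the same entity is literally set union. The IRIs and predicates below are made up for illustration.

```python
# Two independently authored "documents" about the same entity.
# Subjects are globally unique IRIs, so no join keys need negotiating.
doc_a = {
    ("http://example.org/alice", "foaf:name", "Alice"),
    ("http://example.org/alice", "foaf:age", "42"),
}
doc_b = {
    ("http://example.org/alice", "foaf:mbox", "mailto:alice@example.org"),
}

# The entire "dataset integration" step: set union.
merged = doc_a | doc_b

# All three facts now describe one entity in one graph.
print(len(merged))  # 3
```

The same property is what makes real merges work in practice: parsing two Turtle or JSON-LD files into one graph store is a union of triples, not a schema-mapping exercise.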
The rise of SSDs has also more or less obviated the need for clustered indexes as a practical performance consideration. For the small price of trebling your storage footprint, commodity RDF triplestores will index _all_ your attributes/columns without a schema (usually red/black trees or equivalent). Will it scan an integer PK over 100b records as fast as Postgres? No. Is that use case in your hot path? Also no (most likely).
Edit: as for OWL, just take the plunge into rule-based inference directly. From forward-chaining inference (if you want performance and decidability guarantees) all the way up to full-blown Prolog or [miniKanren](http://minikanren.org/) (if you want it as a library in your runtime of choice).
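To make "forward chaining" concrete, here is a minimal naive fixed-point chainer in plain Python: apply every rule to the fact base until no new facts are derived. The rule shown (transitivity of `subClassOf`) is one of the RDFS/OWL-ish entailments a real reasoner computes; all names are illustrative, and a production engine would use semi-naive evaluation instead of rescanning everything.

```python
def forward_chain(facts, rules):
    """Naively apply rules until a fixed point: no rule derives a new fact."""
    facts = set(facts)
    while True:
        derived = set()
        for rule in rules:
            derived |= set(rule(facts))
        if derived <= facts:      # nothing new: fixed point reached
            return facts
        facts |= derived

def subclass_transitivity(facts):
    """If (a subClassOf b) and (b subClassOf c), derive (a subClassOf c)."""
    for (a, p1, b) in facts:
        if p1 == "subClassOf":
            for (b2, p2, c) in facts:
                if p2 == "subClassOf" and b2 == b:
                    yield (a, "subClassOf", c)

facts = {
    ("Cat", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
}
closed = forward_chain(facts, [subclass_transitivity])
# ("Cat", "subClassOf", "Animal") is now in the closure
```

Because rules like this are monotonic and the fact base is finite, the loop is guaranteed to terminate, which is exactly the decidability/performance appeal of forward chaining over full Prolog-style search.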