In their present state, yes. These things are basically sophisticated physics experiments with a cloud API. This is fine for research, education and training, but currently insufficient for problems of economic value.
In principle, the advantage of quantum computers lies not in raw speed or throughput, but in algorithmic scaling. For example, the well-known Grover's search algorithm scales as O(sqrt(N)), as opposed to O(N) for classical linear search. Similarly, Shor's factoring algorithm offers a super-polynomial speedup: factoring an n-bit number naively costs O(2^n) (and remains super-polynomial even with the best known classical algorithms), while Shor's runs in polynomial time, O(n^k). Problems in physics and chemistry simulation fall at various places along that quadratic-to-exponential speedup spectrum as well.
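To make the Grover scaling concrete, here's a toy query-count comparison. The (pi/4)*sqrt(N) iteration count is the standard Grover bound for a single marked item; everything else (function names, the sizes chosen) is just illustration, not a benchmark.

```python
import math

def classical_queries(n: int) -> int:
    # Worst-case unstructured linear search examines every entry.
    return n

def grover_queries(n: int) -> int:
    # Grover's algorithm needs about (pi/4) * sqrt(N) oracle calls
    # to find a single marked item with high probability.
    return math.ceil((math.pi / 4) * math.sqrt(n))

for n in (10**3, 10**6, 10**9):
    print(f"N={n}: classical ~{classical_queries(n)}, Grover ~{grover_queries(n)}")
```

At N = 10^6, that's roughly 786 quantum oracle calls versus a million classical probes; the gap, in query count alone, widens as N grows.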
In practice, a much larger constant overhead is expected than with classical hardware, since error correction must run continuously during operation and the physical operations of addressing qubits are inherently slower than classical CPU or GPU operations. A few publications [1,2] have argued that this overhead essentially negates any quadratic scaling benefit, and that only cubic or better speedups would deliver a practical advantage. That analysis assumes no further improvement in gate times or error-correction overhead.
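To see why a large constant can swamp a quadratic speedup, here is a toy break-even calculation: if each quantum step costs an overhead factor c relative to a classical step, the classical cost N and quantum cost c*sqrt(N) cross at N = c^2. The overhead value below is purely an assumed illustration, not a measured figure from the cited papers.

```python
def crossover_size(overhead: float) -> float:
    # Classical cost: N steps. Quantum cost: overhead * sqrt(N) steps.
    # They break even when N == overhead * sqrt(N), i.e. N == overhead**2.
    return overhead ** 2

# Illustrative (assumed) per-step overhead of 10^6x, combining slower
# physical gates and the cost of continuous error correction:
print(crossover_size(1e6))
```

With a 10^6x per-step overhead, the quantum machine only pulls ahead for problem sizes beyond N = 10^12, which is the essence of the "quadratic speedups aren't enough" argument.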
As for the timeframe, it's anyone's guess. Error correction will be needed to get beyond today's research prototypes, and I think we'll have a better sense over the next 2-5 years of how hard that will be to engineer.