darkhttpd and things like `python -m SimpleHTTPServer` only support static content, so you can't run a chat server on them.
A lot depends on your context. I haven't benchmarked darkhttpd, but it's probably significantly better than my httpdito (linked above), which can spew out 1.8 gigabits per second and at least 20000 requests per second on my 8-year-old 4-core 2.3-GHz amd64 laptop.

An event-driven server like darkhttpd would be a much better basis for adding Comet functionality like chat. That's why, at KnowNow in 02000, we contracted Robert Thau to move our Comet functionality from Apache with Perl CGI (IIRC ≈64 concurrent connections and ≈64 chat messages per second on a 1GHz server with 1GiB of RAM) to his select()-driven thttpd (≈8192 concurrent connections). This work, now mostly of archaeological interest, has been open-sourced as mod_pubsub, which also includes a compatible select()-driven Python server I wrote the next year; it too could handle thousands of messages per second and thousands of concurrent clients.
There are at least four axes to your problem:
- How many concurrent connections do you mean when you say "a small chat server"? This could be anywhere from 2 to 2048.
- How many messages per second is it processing? This could be anywhere from 128 per connection (for something like Mumble) to 1/2048 per connection (if almost all clients are idle almost all the time).
- How big are these messages on average? This could be anywhere from 64 bytes to 64 mebibytes.
- What kind of hardware are you running it on? This could be anything from a 1MHz Commodore PET with an 8-bit 6502 and 16KiB of RAM to a 3.7GHz 12-core Ryzen 9 5900X with 256GiB of RAM. (I don't think anyone's written a chat server on Contiki, but it would be easy to do. It just wouldn't scale to very high loads.)
So, the answer to your question may vary by a factor of about 2⁷³, 22 decimal orders of magnitude. Can you give more detail on what you're thinking about?
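For what it's worth, the three explicitly bounded axes above multiply out to 2⁴⁸; the remaining factor of roughly 2²⁵ is my rough attribution to the hardware axis (PET vs. Ryzen), which the list only bounds qualitatively:

```python
import math

connections = 2048 / 2             # 2^10: range of concurrent connections
msg_rate    = 128 / (1 / 2048)     # 2^18: busiest vs. idlest per-connection rate
msg_size    = (64 * 2**20) / 64    # 2^20: 64 MiB vs. 64-byte messages
explicit    = connections * msg_rate * msg_size

print(math.log2(explicit))         # 48.0; hardware supplies the other ~2^25
```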