> Only HTTPS connections allowed
Seems a bit painful. HTTPS on the webserver itself is a fair bit more painful to set up and administer than HTTPS in your reverse proxy / load balancer (I prefer nginx). Web servers should support plain HTTP.
$ make BENCHMARK=1
It is not a run-time option by design, but it is there.
I want Kore to have sane defaults for getting up and running. That means TLS (1.2 only by default), no RSA-based key exchanges, AEAD ciphers preferred, and the like.
I'm curious: why C? Strings, scoped objects, and C++11 move semantics seem much safer and clearer from an API perspective.
The complaints about C++ seem to mostly be around the ability to abuse the language, not specific issues that C solves. Something like https://github.com/facebook/proxygen seems like a better API.
And I don't quite buy portability: if it's not a modern compiler with decent security checks, then I'm not sure it should be building web-facing code.
I've been building an HTTP/1.1 server in C++11. Along with a C++ wrapper around SQLite, I've been having a lot of fun putting some lightweight forum software together. I definitely enjoy the code structure and compile-time safety over PHP.
Using a threaded model with tiny stacks, and std::lock_guard for atomic operations.
The biggest downside is you have to run the same OS your server uses on your dev box (which is what I do); or you have to upload the source and compile the binaries on your server directly. (or have fun with cross-compilation, I guess.)
To answer the inevitable "why?" -- for fun and learning. Kind of cool to have a fully LAMPless website+forum in 50KB of code. Not planning to displace nginx and vBulletin at a Fortune 500 company or anything.
Still wishing I could do HTTPS without requiring a complex third-party library.
C++ is really horrible; everything feels like an afterthought. The new unique_ptr and shared_ptr end up producing really ugly code. The concept is good, but wow...
Can we move away from this horrible language already?
http://harmful.cat-v.org/software/c++/
http://harmful.cat-v.org/software/c++/coders-at-work
http://harmful.cat-v.org/software/c++/linus
http://harmful.cat-v.org/software/c++/rms
Actually C with a pool allocator is an excellent language for writing a web server. I wrote one in C 15 years ago, and the code is very elegant and simple:
http://git.annexia.org/?p=rws.git;a=tree
It seems that people are so intimidated by the infamous complexity of C++ that they don't even want to bother getting more familiar with it.
So, although technically the existence of C doesn't make sense, as it is superseded by C++ (except for a couple of things), C is winning in the branding department.
Embedded systems. For example a surveillance camera could have a small web interface for configuring it and allowing remote access. Nowadays, even cameras have enough power to run real web servers with Ruby on Rails, but for smaller embedded systems, like a pacemaker, a web app written in C could make sense.
C is ripe for integrating with other higher level languages, and the no-fuss license encourages this. I'm looking forward to checking this out further. Good luck Kore!!
C++ is just one of the many failed attempts to improve on C. So, yes, C has its issues, but C++ is certainly not the solution.
The long story. There are three dominant Turing-complete axiomatizations: numbers (Dedekind-Peano), sets (Zermelo-Fraenkel) and functions (Church). The Curry-Howard correspondence shows that all Turing-complete axiomatizations are mirrors of each other.
If "everything is an object", it means that there must exist a Turing-complete axiomatization based on objects.
Well, no such axiomatization exists; nobody has phrased one. Therefore, object orientation and languages like C++ are snake oil. They fail after simple mathematical scrutiny. C++ is simply a false belief primarily inspired by ignorance.
Writing a web application in C sounds like a good trigger for an utterance from the Jargon File: "You could do that, but that'd be like kicking dead whales down the beach."
We've advanced the state of the art quite a bit with dramatically more expressive languages than C that are sufficiently efficient in terms of memory and CPU. This is especially true when communications are occurring over HTTP and not direct socket-to-socket comms.
Why use C instead of D, Rust, Go, C#, Java, Perl, Python, Ruby, Scala, Clojure, Erlang, Elixir, Haskell, Swift, OCaml, Objective-C...?
I didn't miss C++, it just seems a worse alternative than C.
> Why use C instead of D, Rust, Go, C#, Java, Perl, Python, Ruby, Scala, Clojure, Erlang, Elixir, Haskell, Swift, OCaml, Objective-C...?
Because C runs pretty much anywhere? There are plenty of platforms where C is available where I doubt you'd find any of the others above (e.g. C64; yes there are C compilers for them; yes, I'm mentioning it tongue in cheek)
Because you can generate small, compact static executables? E.g. I used to write network monitoring software and an accompanying SNMP server for a system with 4MB RAM and 4MB flash, the latter of which had to include the Linux kernel and a shell on top of the application in question. The system was so limited we did not run a normal init, and couldn't fit bash - instead we ended up running ash as the init...
There are plenty of use-cases where "web application" == "user interface for a tiny embedded platform".
C is a good solution for things like realtime multiplayer with lots of state, lots of side effects, etc. A lot of the modern abstractions actually get in the way, for example:
* List traversal order matters a lot when it's something like a list of monsters getting struck by a spell and the spell has complicated side effects. Brushing it under the rug with abstract iterators or functional Array.map's is a recipe for not knowing how your own game works.
* Realtime is an illusion, it really means "fast turn-based", you don't want players with fast connections to get an advantage by spamming commands and having them executed the instant they're received. You want to queue commands and execute them fairly at regular pulses. So much for all your abstract events infrastructure!!
* Certain object-oriented idioms become eye-rollingly silly when your application actually involves _objects_ (in the in-game sense). Suppose it's a game where players build in-game factories, suddenly the old "FactoryFactory" joke just got a million times worse.
I'm not saying C is the best for those sorts of applications, but it's certainly not bad, and a lot of modern language features just aren't appropriate.
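To make the first point concrete, here is a hypothetical C sketch (the monster struct and the spell rules are invented for illustration): traversal order and the side effects live right in the loop, rather than behind an abstract iterator.

```c
#include <stddef.h>

struct monster {
    int hp;
    struct monster *next;
};

/* Hypothetical spell: hits monsters in list order and halves its
 * damage on every jump, so traversal order is part of the rules. */
int
cast_chain_lightning(struct monster *head, int damage)
{
    int killed = 0;
    struct monster *m;

    for (m = head; m != NULL; m = m->next) {
        m->hp -= damage;
        if (m->hp <= 0)
            killed++;
        damage /= 2;    /* side effect: each jump weakens the bolt */
    }
    return (killed);
}
```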
> Why use C instead of D, Rust, Go, C#, Java, Perl, Python, Ruby, Scala, Clojure, Erlang, Elixir, Haskell, Swift, OCaml, Objective-C...?
C and Rust are not in the same playing field as Ruby, Python, or PHP; they are typed, compiled, and MUCH faster.
You'll obviously build 99% of your application in Ruby, but you might need C or Rust for high-volume calculations.
An example that happened to me a few weeks ago: scaling a financial application to make millions of calculations. The core app is made with PHP, and the difference between 0.1 sec and 0.000764 sec gets important here.
This looks pretty neat, actually. A ton of effort clearly went into it, and the code looks really well written, with pretty well-thought-out interfaces.
"Its main goals are security.."
Is it actually?
I also don't really see an advantage of using something like this over something like the Go net/http package.
Web-type API stuff is usually high enough level that something like C doesn't make sense. Go has nice enough standard packages for system things that even if I was doing a lot of system-y stuff I would be alright. I don't really see the type of work I would be doing where I want to use this.
The site and documentation look well done, great job!
Architecture looks pretty interesting too. I wonder why there was a need for an accept lock? An ordinary accept() call already allows simultaneous threads/processes to wait on a single socket.
Some fantastically quick points from a very cursory glance at the code. Feel free to ignore this.
- The code uses the convention of putting the argument of return inside parentheses, making it look like a function call. This is very strange to me.
- It treats sizeof as a function too (i.e. always parentheses the argument).
- It is not C99, which always seems so fantastically defensive these days.
- It's not (in my opinion) sufficiently const-happy.
- I saw at least one instance (in cli.c) of a long string not being written as an auto-concatenated literal but instead leading to multiple fprintf() calls. Very obviously not in a performance-critical place, so perhaps it's not indicative of anything. It just made me take notice.
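For readers unfamiliar with the style being described, a tiny illustration (invented code, not taken from Kore) of the parenthesized-return and function-style-sizeof conventions:

```c
#include <stddef.h>

/* Illustrative only: OpenBSD-flavored conventions, with return
 * values in parentheses and sizeof written like a function call. */
static const int default_ports[] = { 80, 443, 8080 };

size_t
default_port_count(void)
{
    return (sizeof(default_ports) / sizeof(default_ports[0]));
}

int
clamp(int v, int lo, int hi)
{
    if (v < lo)
        return (lo);
    if (v > hi)
        return (hi);
    return (v);
}
```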
I see you picked out the few things I consistently hear about the coding style I adopted, which is based on my time hacking on OpenBSD. I have no real points to argue against those, as it comes down to preference in my opinion.
I am curious why you arrived on it not being sufficiently constified however. I'll gladly make sensible changes.
As for the multiple fprintf() calls... to me it just reads better, and the place it occurs in is, as you stated, pretty obviously not performance-critical.
Was quite excited to try out a little websocket server with Kore until I saw it forks per connection. I don't really want 20k processes for handling 20k connections; I was really hoping for an event loop.
Evented io is great for extremely high concurrency, but that isn't always the right thing to optimize for. A forking web server might be faster for users depending on the application.
Lastly, you can't just have an event loop without also creating an entirely async platform. For an event loop to work well, all operations from file reading to network requests need to be completely async.
Unless you are optimising for space, there is no real reason to use C in IO bound processes (for which event loops are ideal); you may as well use Python (or even JS if you must) as your performance will be dominated by IO time.
There's always Vibe.d from the D programming language, if you're seriously considering a 'native' approach to web development. Granted, you would have to write in D, but most C libraries are available. Concurrency is definitely accounted for, since D supports it in the language itself.
Writing high-level C applications can be easy if you use a library that frees you from using dynamic memory for typical data structures (e.g. strings, vectors, sorted binary trees, maps). I'm developing a C library for high-level C code, with hard real-time in mind; it is already functional for static linking: https://github.com/faragon/libsrt
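The general approach (typical data structures with no hidden malloc()) can be sketched like this; an invented illustration, not libsrt's actual API:

```c
#include <string.h>

/* Sketch of the idea: a bounded string living entirely in the
 * caller's storage, so ordinary string handling needs no malloc().
 * Appends that would overflow are refused instead of reallocating,
 * which is what makes the memory behavior hard-real-time friendly. */
struct bstr {
    size_t len;
    char data[64];
};

int
bstr_append(struct bstr *s, const char *src)
{
    size_t n = strlen(src);

    if (s->len + n + 1 > sizeof(s->data))
        return (-1);    /* would overflow the fixed buffer: refuse */
    memcpy(s->data + s->len, src, n + 1);
    s->len += n;
    return (0);
}
```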
When an HTTP API is just an additional feature of a larger project, it may make sense to keep using C: a toolchain available everywhere and well known (including cross-compilation and full bootstrap), a small memory footprint, easy use of any library needed for the project.
I am doing a lot of that and will keep an eye on Kore. Unfortunately, HTTPS-only and a non-evented core is a no-go for me.
I am currently relying on the web server embedded in libevent, as well as wslay for websockets and some additional code for SSE. To easily start a project, I am using a cookiecutter template: https://github.com/vincentbernat/bootstrap.c-web
It would probably be better to also provide a lib or DLL, not just a program that renders C/C++ servlets. It seems that everything Kore executes from the fed-in code runs in servlet threads, or is at least started from a servlet thread. It would also be cool to add an "application" framework, not only a "C-servlet" framework.
If you think Kore is interesting, then also check out Tntnet <http://www.tntnet.org>. I checked it out a few years ago and it felt good: stable, complete, easy to use, etc.
Process-per-connection does not scale no matter how lightweight, even with COW. Kore's connection-handling model is the reason apache2 mpm_prefork fell out of favor many iterations ago.
The only valid argument for avoiding a single event-based I/O loop is some sort of hard-blocking I/O, such as disk or a non-queuing chardev.
However I'm still not biting; this is solved, and as usual the answer is somewhere in between. For example, in RIBS2 there are two models for connection handling: event loops for connections and "ribbons" for the non-queuing bits [1]. RIBS2 is also written in C.
[1] https://github.com/Adaptv/ribs2/blob/master/README
> apache2 mpm_prefork fell out of favor many iterations ago.
By whom, exactly? There are still plenty of reasons to use a forking web server (see my other comment in this discussion). Saying it "does not scale" is misleading; even with an event-driven model there are only so many CPU resources that can be used to serve responses to clients.
Event-driven webservers are fantastic compared to forking ones, for keeping open many thousands of relatively idle connections (if that is your definition of "scale"). But many web services simply don't do that.
Preforking webservers, like event-driven ones, still have a rightful place in this world. As with all things technology, you have to pick the right tool for the job.
var kore = require("kore");
kore.on("request", http_request);

function http_request(req, resp) {
    var statusCode = 200;
    resp.write("Hello world", statusCode);
}
pjmlp | 10 years ago
But I doubt such tiny processors are being used on boards with network capabilities.
j-pb | 10 years ago
Unless you're writing for an embedded system, a game, or a high-performance number-crunching application, C is premature optimisation.
And even there we see drastic changes today: embedded systems have become so powerful that they can run scripting languages (http://www.eluaproject.net), game engines are written in C and scripted with other things (http://docs.unity3d.com/ScriptReference/), and in-memory big-data systems like Spark offer significant advantages over classical HPC frameworks like MPI (http://www.dursi.ca/hpc-is-dying-and-mpi-is-killing-it/).
While JS is horrid, it at least doesn't have manual memory management.
Dewie3 | 10 years ago
On the other hand, there is a fair bit of negativity in this thread, just because it is C. That might not be in the hacker spirit, so to speak.
perdunov | 10 years ago
This is such a retarded opinion. A person expressing it probably doesn't know shit about C++, or is just plain an idiot.
jvink | 10 years ago
It uses an event-driven architecture with per-CPU worker processes. The number of workers can be controlled via the config.
luikore | 10 years ago
> Event driven architecture with per CPU core worker processes
So each process should be able to handle a lot of concurrent connections, just like nginx.
And I tried the websocket example, and saw only the first worker process responding whenever a websocket is created.
jvink | 10 years ago
https://github.com/jorisvink/kore/tree/master/examples/sse https://github.com/jorisvink/kore/tree/master/examples/webso...
jvink | 10 years ago
It uses per-CPU worker processes which multiplex I/O over either epoll or kqueue.