Yes, I thought the same - so you have to write a server. What about locking? Roll your own, I suppose. Multi-user - roll your own? What about hot backups? Rollbacks?
I always conclude these things are advocated by people who have no experience with large multi-user systems. The same as the NoSQL movement - they'll eventually build a database server. They build a system using the cool thing, which works fine when they test it on their single-user setup. Go live - aagghh, what's happening? Why are all these people trying to access my data simultaneously? And so on.
One I'll always remember was when XML was the next big thing - they decided to store the raw XML in a database. It was a commercial product, and we were interfacing to it from our system. Once we found this out we started asking questions - no, no, it works fine, we were told, laughing at us old database guys. Went live, couldn't handle 5 TPS - what a surprise, and as far as I'm aware it never worked. There is this continuous circle: databases are bad, no no do this, you don't need that, no things have changed - what do you database guys know? It's entertaining to watch if nothing else. My advice: learn SQL and some database tuning. It's not that hard, at least compared to writing your own database engine.
It may be ironic, but there are benefits to rolling your own. Having your persistence layer talk in terms of your business models instead of raw SQL could be seen as a benefit in some contexts. The process acting as the "server" can be written to relax consistency depending on the specific business operation being performed (e.g. no need for transactions around log entries), which can yield substantial performance gains. The benefits afforded by guaranteed exclusivity between application and database are difficult to overstate.
These advantages of purpose-built functionality would also extend to replication and clustering: multiple distributed processes, each with an independent database, synchronized via some custom protocol that operates in terms of specific business models and processes.
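A minimal sketch of the idea above - a persistence layer that speaks in business terms and relaxes consistency per operation. All names here (`LedgerStore`, its methods, the schema) are hypothetical, and SQLite stands in for whatever backing store such a "server" process would actually use:

```python
import sqlite3

class LedgerStore:
    """Hypothetical persistence layer: callers speak in business
    operations, never raw SQL."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS entries (account TEXT, amount REAL)")
        self.db.execute("CREATE TABLE IF NOT EXISTS audit_log (msg TEXT)")

    def record_entry(self, account, amount):
        # Business-critical write: wrapped in a full transaction.
        with self.db:
            self.db.execute("INSERT INTO entries VALUES (?, ?)",
                            (account, amount))

    def log(self, msg):
        # Relaxed consistency: no explicit transaction around log entries,
        # as the comment above suggests.
        self.db.execute("INSERT INTO audit_log VALUES (?)", (msg,))

    def balance(self, account):
        row = self.db.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM entries WHERE account = ?",
            (account,)).fetchone()
        return row[0]

store = LedgerStore()
store.record_entry("acme", 100.0)
store.log("entry recorded")
print(store.balance("acme"))  # 100.0
```

The point is only that per-operation consistency choices live in one place; whether that buys enough to justify owning the "server" is exactly the debate in this thread.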
You can just turn these things off if you so desire - in reality the overhead is a lot less than you'd think. You'd be far better off spending the time you'd use writing half a database on optimising your application, and just use a database. Or maybe you don't really need a database at all.
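"Turning these things off" is a one-liner in most engines. A hedged SQLite example - the pragma values below are illustrative durability-for-speed trade-offs, not a recommendation:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Relax durability instead of writing your own engine:
db.execute("PRAGMA synchronous = OFF")      # don't fsync on commit
db.execute("PRAGMA journal_mode = MEMORY")  # keep rollback journal in RAM

db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")
with db:
    db.executemany("INSERT INTO events (msg) VALUES (?)",
                   [("event-%d" % i,) for i in range(1000)])

count = db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 1000
```

With settings like these you keep SQL, indexing, and the query planner while shedding most of the overhead people cite as a reason to roll their own.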
You don't need to write a server to use it. You need to recognize you're using the wrong tool to solve your problem.
I've used SQLite as an embedded database and as a log file; it's even possible to use it as a virtual filesystem in Tcl starkits. But a high-demand, multi-access data solution it is not. Yes, you can make it work, but you need to justify the cost of doing all that server work when you could just use a SQL server that already meets your needs.
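The "SQLite as a log file" use mentioned above might look something like this sketch - the filename and schema are made up for illustration:

```python
import sqlite3
import time

# ":memory:" here; in practice this would be a file like "app.log.db".
log = sqlite3.connect(":memory:")
log.execute("CREATE TABLE log (ts REAL, level TEXT, msg TEXT)")

def write_log(level, msg):
    log.execute("INSERT INTO log VALUES (?, ?, ?)",
                (time.time(), level, msg))

write_log("INFO", "started")
write_log("ERROR", "disk full")

# Unlike a flat file, the log is queryable with plain SQL:
errors = log.execute(
    "SELECT msg FROM log WHERE level = 'ERROR'").fetchall()
print(errors)  # [('disk full',)]
```

Single writer, single process - exactly the embedded niche where SQLite shines, and exactly not the high-demand multi-access case.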