We are currently using Lua in an embedded system to define user interaction. The embedded application is a rather complex thing with a screen and a few hundred buttons, LEDs, encoders and faders, so user interaction can get messy. It all has to run on a single Blackfin processor.
Hence, there are serious memory constraints, and processing power is very limited, too. But tell you what: Lua (no JIT) does just fine.
User interaction is pretty complex. Most buttons have different functionality depending on context. Button presses can result in the screen content changing, LEDs changing, even motorized fader movements... More often than not, several dozen hardware elements have to be updated as a result of a single button press. Implementing all that in C++ would be a major pain.
We implemented a minimal framework in Lua that just barely looks like an object-oriented system. It gets called with updates from the hardware ("Button A has been pressed") and responds by sending similar messages back ("Move Fader X to position Y"). Wrapping all that in some kind of convenient object system only took about 100 lines of Lua code. Interfacing that with the codebase took about 200 lines of C++ code.
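Roughly, the shape of it (a simplified sketch, not our actual code; send_to_hardware stands in for the hypothetical callback into C++):

    -- sketch: each hardware element registers event handlers, and the
    -- C++ side calls dispatch() with hardware events
    local elements = {}

    local function register(name, handlers)
      elements[name] = handlers
    end

    function dispatch(name, event, ...)       -- called from the C++ side
      local e = elements[name]
      if e and e[event] then e[event](...) end
    end

    register("button_a", {
      pressed = function()
        send_to_hardware("fader_x", "move", 0.5)  -- hypothetical C++ callback
      end,
    })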
We also use Lua to do string processing and value conversion ("display element D wants to display value F, but represent it on a logarithmic scale and convert it to SI units"). Implementing this can be done in something like 20 lines of C++.
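A sketch of what such a conversion function looks like (simplified; the constants are made up):

    -- sketch: map a normalized fader position (0..1] to dB for the display
    local function format_level(raw)
      local db = 20 * math.log(raw) / math.log(10)  -- logarithmic scale
      return string.format("%+.1f dB", db)
    end
    -- format_level(0.5) --> "-6.0 dB"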
So, a call into the Lua code will take a couple hundred cycles (yes, we have to count cycles), sometimes even tens of thousands of cycles. But for user interaction, that is perfectly fine. Some well-optimized C code could probably do the same thing quite a bit faster, but that would take ages to write and debug. (Plus, the bottleneck is not the Lua code; loading images from disk or moving faders takes considerably longer.)
So, we are really happy with Lua.
But where Lua really shines is in its adaptability. The Blackfin does not support double-precision math natively, but Lua uses double as its standard number type. Well, you only have to change one #define (LUA_NUMBER in luaconf.h), and Lua uses float instead. The Blackfin does not have an MMU, so we have to do some memory management manually. Well, you only have to swap out one function (the allocator you pass to lua_newstate) and Lua will be fine with that.
A mixing console. The Lua code is considered part of the software; it will not be exposed to the end user. It will, however, make it trivial for us to do quite complex modifications of the software without recompiling.
For example, displaying feet instead of meters is a trivial matter if all you have to change is some text/Lua files.
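For instance, the whole change could be one small function in a Lua file (a sketch):

    -- sketch: unit conversion kept in a Lua file, so switching the
    -- displayed unit means editing this, not recompiling the firmware
    local function to_display_units(meters)
      return string.format("%.1f ft", meters * 3.28084)
    end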
I also have to ask, with knobs and faders and sliders, is he making a synth or console or something? I've done something similar, although it never made it to market .. and I am very curious about how viable this is.
Of course it's viable to use Lua on a Blackfin - what I meant was: what is your product? It sounds very interesting and I'd be happy to know of other examples of pro-audio gear using Lua and scripting languages in general, paired with a C/C++ core .. having built a few myself. :)
Ah, well from the perspective of someone interested in your product, I hope you'll give me enough hints that I'll recognize it when I see it on the market!
I find myself having a love/hate relationship with Lua where I work. We have a solid in-house C++ game engine, and like good little agile programmers we exposed a good portion of it directly to Lua so we could prototype and get stuff up and running quickly.
Now it has reached a point where some people are writing their entire game in Lua, and it just ends up becoming one impossible-to-debug mess. This wouldn't be as much of a problem if there were a decent, cross-platform remote debugger available -- but I have yet to find one that doesn't rely on doing silly things like DLL injection (Decoda). I would pay good money, like >= $500 a seat, for something that Just Worked and met the needs I listed above.
Yeah, I am having this issue as well. I wouldn't call my engine the most solid thing in the world, and I set out with the express intent of writing my game as much in Lua as possible.
I have resorted to having a Lua REPL that can execute in realtime, so I can just chuck in data and examine my objects that way. It would be nice to have a breakpoint debugger though.
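The core of such a REPL is tiny; a sketch, assuming Lua 5.1 (use load() instead of loadstring() on 5.2+):

    -- sketch: evaluate one line typed into the in-game console
    local function repl_line(line)
      local chunk, err = loadstring("return " .. line)    -- try as expression
      if not chunk then chunk, err = loadstring(line) end  -- else as statement
      if not chunk then
        print(err)
        return
      end
      local ok, res = pcall(chunk)
      if not ok then
        print("error: " .. tostring(res))
      elseif res ~= nil then
        print(res)
      end
    end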
We embed Lua and use it as rich glue on Windows. We wrote our own COM binding, which is much simpler and faster than something like LuaCOM because we don't use all of COM's features (no installed classes / CoCreateInstance). It sounds simple on its face, but once you have a rich binding, it can become complicated quickly as more and more code is written in the system and parts of the code do things you wouldn't expect or can't easily detect. We wound up writing our own debugger built into the embedding so that at any time we could pull up a console, interact with the engine, and trace what was going on. Tools like that should be considered a necessity whenever you're creating an embedded script system inside a native-code app.
When you're wrapping any ref-counted system in a GC system (e.g. JavaScript wrapping GObject), there is the potential for great disaster: either ref loops or memory corruption, as discussed in other posts here. It just takes training and discipline to make sure something you're working on is built correctly according to the rules of the system.
GC time has come up a bit as an issue, but mostly because our system is not linear in the way a game is. There are distinct states and times when the user is waiting and when they are not, and in some states a GC pause is inopportune because it is perceived more by the end user. It is a balancing act to determine whether running the GC manually in the desired states makes a perceived or real difference compared to letting the engine run it whenever.
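Concretely, the manual option amounts to something like this (a sketch; the state names are invented):

    -- sketch: force a full collection only in states where the user is
    -- already waiting, so the pause goes unnoticed
    function on_enter_state(state)
      if state == "loading" or state == "idle" then
        collectgarbage("collect")  -- full GC cycle while a pause is cheap
      end
    end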
Just try this one, it shows off some of LuaJIT's raw speed:
./luajit samples/intro.lua
Should work on Windows, OS X, and Linux (x86 and ARM (EfikaMX)).
As long as things are kept in Lua most of the time and calls into "C" are rare, the better. And if "C" is to be called, it had better be called through the FFI - pure awesomeness...
I'm currently working on a game engine that is a mix of C++ and Io. Here are some things I learned:
- By far the most time spent in cross-language calls is in functions that take two integers and return a boolean. What's up with that? Enums! The C++ 3D engine I'm using uses enums, and my binding library represents enums in script with an object that overloads the == and != operators. I really need to get around to optimizing that.
- Garbage collection is not slowing me down (much). I only spend 10% of my time in GC functions for the script (and I have a lot of scripts running). Once I accidentally left a call to "Collector collect" in my game loop; then GC took 40%. But Io's garbage collector didn't make the game jerky, even with a collect in my game loop.
- There are some things that are better to write in C++ than in script. I don't mean better from a performance standpoint; I mean there are some things that are quicker to implement in C++ than in script. I started with the idea "I'm going to write everything in Io and optimize later," and I made a lot of bindings to C++ functions, both high- and low-level, to support that goal. But some things, like dealing with triangle meshes or bit twiddling, are better done in C++. Scripting is faster to implement ideas in for so many things, but in the places where C++ shines, it is still faster to do in C++.
- When you make memory errors in C++, they generally cause a crash in Io's garbage collector. I once lost a month debugging and instrumenting every line of Io's garbage collector to figure out why it was freeing objects still in use, only to discover the actual problem was a C++ library squirreling away pointers to objects it didn't own and reading through those pointers later, when the objects were not guaranteed to still be live (a bug even in a pure C++ program).
- The chain of events where a memory error in C++ blows up in the script garbage collector goes like this: a C++ object gets deleted, but another C++ object still has a pointer to it. When the script asks all the C++ objects to mark the objects they're using (Io uses a mark-sweep collector), the live C++ object holding the rogue pointer calls the mark() function on the deleted object. The call succeeds because the deleted object hasn't been completely overwritten. But the corpse of the deleted object in turn holds references to script objects which it has not been marking previously, because this C++ object no longer existed when those sweeps started. The script objects it used to hold have just been collected, but it still knows their old addresses. It asks the script interpreter's garbage collector: hey, go mark script objects for me at these addresses. The script's garbage collector then tries to mark script objects that aren't live, follows the graph of dead script objects' references to other script objects, also dead, and eventually reaches a bum pointer. Blammo! The GC just blew up deep in a forest of script objects, and the poor script language gets blamed for something C++ did.
First, because I knew Io and not Lua. I had gotten interested in Io years before. Io was actually my first dynamic language, not Python or Ruby or Javascript. I stopped using Io for a few years when I moved on to other things, but _why's article "Io Has a Very Clean Mirror" brought it back to mind.
I seriously considered Lua, but I felt object orientation needs to be incorporated very fundamentally in a language. I had previous experience with mIRC, which shares Lua's idea of "everything is a table", and while mIRC is no doubt productive, I felt tables would be a step backwards compared to Io's objects.
At the time I started, the C/C++ binding frameworks for Lua were not as mature as they are now, not as numerous, or I simply wasn't aware of them.
From a language-aesthetics standpoint my decision came down to Io or Ruby, but I looked at the C Ruby source code and it didn't seem very suited to embedding. For instance, the official implementation of Ruby had some dependence on global/static variables, which precluded the idea of running multiple instances of the VM in a process. Io allows this, although I haven't used it yet; I plan to use it to support threading, one VM per thread. And basically the Ruby interpreter codebase wants to own main() and call my code as a library; I wanted to own main() and call the interpreter as a library. Io supports this, and its codebase is smaller.
C++ function to compare two enums:
bool operator ==(SomeEnum left, SomeEnum right)
{ return static_cast<int>(left) == static_cast<int>(right); }  // cast so the comparison doesn't recurse into this operator
In my bindings (in C++):
LM_BIND_NONMEMBER_OP(SomeEnum, (==))
In Io script:
if( obj property == SomeEnumInstance, doStuff())
The == in the script calls the overloaded == operator defined in C++, via the binding. Turns out, you do a lot of this in game programming, at least with Irrlicht (or maybe it's a quirk of what I'm doing in my game scripts). Testing an enum for equality is cheap in C++. Calling == from script via a binding adds an order of magnitude (or two!) more overhead. I could improve this by optimizing for this case.
(You may wonder why I don't just convert enums to script numbers, which surely would be faster. Mainly because I wrote my Io-to-C++ binding library to be strongly typed; it does not allow mixing a number with an enum, and if you auto-demoted enums to numbers in script, that type information would be lost. And I don't think Io's interpreter goes to the same lengths as, say, a Smalltalk does to optimize for numbers; Io takes an even more purist approach to treating everything as an object.)
He's making a C API call to essentially compare two integers instead of just comparing them directly in Lua. You can fix that by pushing the enum values to Lua as numbers instead of userdata with metatables for comparisons.
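On the script side that would look something like this (a sketch, names invented), where the == is then a native numeric comparison with no cross-language call:

    -- sketch: enum values pushed once from C++ as plain numbers in a table
    SomeEnum = { FOO = 0, BAR = 1 }
    if obj.property == SomeEnum.BAR then doStuff() end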
Well, it's Io, but you're still correct: I could fix that by letting enums "decay" into Numbers. In fact, I originally had that happen accidentally, and I actually considered it a bug. The reason I considered it a bug was that I made a design decision to strongly type everything that goes across the boundary between script and C++.
To summarize: use the FFI library (http://luajit.org/ext_ffi.html), which enables LuaJIT to directly access C types and functions, bypassing the Lua C API. Or write more of your code in pure Lua, reducing usage of the Lua C API bridge.
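The introductory example from the FFI docs gives the flavor: declare the C function, then call it directly, with no lua_push*/lua_to* traffic:

    local ffi = require("ffi")
    ffi.cdef[[
    int printf(const char *fmt, ...);
    ]]
    ffi.C.printf("Hello %s!\n", "world")  -- direct call into libc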
Mixed-language projects can be incredibly powerful (e.g. NumPy doing the heavy lifting in C and the rapid development in Python), but it requires taste and experience to decide which bit of the system belongs on which side. And it's subtly but importantly different from just writing a binding to a C library.
Right. I wasn't referring to the mixed language approach, rather I meant the overengineering he's talking about. Things like FactoryFactories can be understandably infuriating to work with.
I have a confession to make..
I have a very strong feeling ... the third world developer Mike speaks about could be me...
Well.. I was thrown into this new project .. where I had to speed up some Lua + C code, show some results in a couple of days and I had absolutely no clue about Lua..
Anyways that's no excuse for troubling you, Mike....
I am sorry.