The Toit language is now open source (toit.io)
218 points by tosh on Nov 22, 2021 | 101 comments



Knowing Lua, what advantages would I see if I tried using Toit instead? What disadvantages? How about eLua in particular? How about OpenResty with LuaJIT, maybe? (I'm asking honestly, to try to understand, including which level of "IoT" it's trying to target, which I'm still lost on, even after browsing to toit.io - which only added to my confusion by suddenly mixing "containers" and ESP32 in the same sentence.)

How about vs. ye olde Java VM running on your SIM-card? (!)


I notice that one of the differences is licensing. "The Toit compiler, the virtual machine, and all the supporting infrastructure is licensed under the LGPL-2.1 license."

Lua: MIT license
eLua: MIT license
OpenResty: BSD-style licenses

Could the LGPL license limit its growth and popularity?


Maybe. Some developers have an aversion to the GPL family because of its “viral” nature. An MIT-licensed project can be used in an LGPL one, but not the other way around. However, many projects are successful despite being GPL-licensed. The Linux kernel is a major one, and Marlin FW (for 3D printers) has had its license used a few times to force Chinese 3D printer makers to release their source.

There are a lot of misconceptions and FUD about the GPL family. So, as a clarifier: the LGPL does not restrict what the user can do with a product;[a] it only limits other developers’. If another program wants to integrate Toit (or anything LGPL-licensed) into their product, it needs to abide by the LGPL’s linking and open source[b] provisions. But a user can use an LGPL- or GPL-licensed product without restriction.

[a]: Despite what some developers may claim

[b]: It’s more “source available”, but for the users only. Open source is just one way to fulfill that requirement.


This captures the spirit of the GPL, while getting the actual terms of the LGPL wrong. All FSF licenses are about ensuring end-user freedom; however, the LGPL only restricts a developer if they are directly modifying the code of the library itself. In the case of Toit, that would mean modifications to the language itself. Footnotes would be more useful if they linked to actual information, so here’s one: [a]

For reference, GCC is GPLv3, but that obviously doesn’t apply to the inputs/outputs of the program; the same reasoning applies in the context of a language like Toit. [b]

[a] https://www.gnu.org/licenses/gpl-faq.en.html#LGPLv3Contribut...

[b] https://www.gnu.org/licenses/gpl-faq.en.html#CanIUseGPLTools...


Unless the entirety of the built binary is free of the LGPL, I can tell you without hesitation that the answer is _definitely_.


Short answer: no. Longer answer: the LGPL is generally fine; it’s a library license intended to enable usage, where only direct modifications to the underlying library need to be conveyed (triggered on distribution). It’s not the same as the GPL or AGPL, where distribution is considered via aggregate work or access vectors, respectively. IANAL, but I’ve spent plenty of time educating them ;-)


> Could the LGPL license limit its growth and popularity?

Not really. A lot of people license their code as BSD in the hope that some massive company picks it up and they either get famous or a fat consulting contract. In reality, what we've seen companies like Amazon do is just copy the idea for themselves and pay their internal engineers. The GPL gives you more leverage; you can get money out of them to remove its restrictions.


> Could the LGPL license limit its growth and popularity?

No.


Out-of-the-box container OTA updates, multi-container support, container reboot logic, a garbage-collected language (Lua is too), a design aimed at not bricking devices, etc. A lot of the usual problems in IoT that you'd otherwise have to solve yourself are already solved for you, so you can just focus on writing applications. I believe it's a very powerful abstraction for IoT developers.


Programming for Arduino in C/C++ has been the worst, most unproductive endeavor I have ever undertaken as a software developer. I have no idea how people write whole browsers and operating systems in this language.

I rewrote everything in CircuitPython after 1500 LOC of C++ madness, and have been living happily ever after. I had to switch to a more powerful board, though; it would be great to have lower requirements. Adafruit's M0 Adalogger board has been perfect for me in every way, except it does not have enough flash for CircuitPython.

Toit looks funky, but nice. They seem to already have a decent start at the ecosystem, with very basic libraries like drivers for HD44780 already made. Doesn't have everything that I need yet, but looks very promising.

I'm working on a flight controller, not IoT, so I don't need their serviceability API, but that's an interesting way to monetize. If I needed that, I can definitely see it being cheaper to pay the fee than to develop stuff like a reliable OTA update system myself.


Agreed, C/C++ can sometimes be a real pain to work with. I went a slightly different route, however, after seeing how much less I could do with an equally powerful board running MicroPython, and settled on Nim. It has a syntax that looks familiar to Python users, but it compiles through C/C++, so every board I could run C/C++ on, I could run Nim on. It also meant I didn't have to rewrite all the libraries.

Recently I've been digging into creating a pure Nim ecosystem for microcontrollers after discovering just how much overhead Arduino and other generic C approaches add. The benefit Nim has here is that it is incredibly strong at compile-time execution. This means that I can write succinct Python-looking code which compiles to tiny binaries with great performance.

My biggest project in it so far is firmware for a keyboard that is split in half. The whole project compiles down to about 4 KB with all the code for the port expanders, the layouts, and the special key macros I use. For comparison, the Arduino code to blink an LED over one such port expander was about 3.8 KB, and MicroPython can't even compile a hello world for the board I'm using.


I wonder if Nim is better or worse suited for low-powered boards than TinyGo. Zig is probably not mature enough, and not sufficiently safer than C++, to satisfy your and your parent post's requirements, though.


TinyGo has a list of explicitly supported devices, while Nim can run on anything you can compile C for. I also don't see some of the lower-powered hobby devices, such as the Digispark (ATtiny85) or similar, on that list. Not sure if that's because TinyGo can't run on them, or just that no one has written a target for them yet.

One huge benefit Nim has in this category is that it compiles to C: if your board can run C, it can run Nim. And don't let the Nim GC fool you; it can be disabled, or switched to the more modern ARC mode, which runs fine on microcontrollers (and if you don't use garbage-collected memory, it doesn't add any code to your project). Since it compiles to C, you can easily wrap libraries and compile for pretty much anything.

Zig can at least also wrap C code super easily, but I'm not sure how the overhead and targeting is for it.


Sounds great; are there any project pages for this initiative?

I'm also sceptical of HLLs on restricted devices, i.e. MCUs, but Arduino seems like the only game in town atm.


Nothing about the entire ecosystem I was talking about yet. But my initial work on the keyboard firmware can be found here: https://github.com/PMunch/badger/tree/final. There are many different projects in Nim running on microcontrollers, though, but nothing built on a common ecosystem.

HLL?


> I had to switch to a more powerful board though

And therein lies the rub. If you can throw more computational power and resources at your problem, then you should absolutely go for something with simpler abstractions that make better use of your time.

The problem, however, is not with C or C++, but rather with inexperience in managing the control/flexibility these languages allow. You now have 50 ways to shoot yourself in the foot, instead of the otherwise handful. And you also get the choice of a few bazookas.

> I have no idea how people write whole browsers and operating systems in this language.

They use best practices that avoid all of the aforementioned 50 ways.


Can confirm; I foot-gunned myself over and over even though I'm not doing anything complicated. Couldn't figure out the last memory-corruption trap I fell into, so I had to abandon ship.

I'm honestly surprised how well regarded the Arduino software stack is, given that its target audience is not exactly seasoned C++ developers. I guess most casual users have very simple programs, or just go straight for MicroPython/CircuitPython.


C++ memory management is hard, and best practices don't solve all the problems you encounter. (Note that I have more experience with desktop programs, like "browsers and operating systems", than embedded boards.)

Managing tree-shaped ownership structures is possible but mistake-prone, and Rust includes a sound checker to catch mistakes. (Rust pushes users towards "writable xor aliased" pointers, which are integral to this sound checker. "Writable xor aliased" is one of Rust's biggest strengths and weaknesses, and differs from both C++ and GC'd languages.)

Managing intrusive linked lists is common in C (I have less C experience than C++) and probably tractable as long as you don't carry around pointers which may dangle. Rust doesn't help write correct intrusive code, and worse yet Stacked Borrows (a proposed set of rules for pointer aliasing) interferes with attempts to write intrusive collections in unsafe code (https://gist.github.com/Darksonn/1567538f56af1a8038ecc3c664a...). Maintaining intrusive sorted-tree maps and such is probably possible in C, but perhaps even trickier to get right (note that I've never built one myself).

I try to avoid aliased object graphs (multiple objects hold pointers to another object) when architecting greenfield projects, and it's often (but not always) possible. When I encounter pointer aliasing anyway (often in existing codebases), I usually fall back to shared_ptr (reference counting) because manual memory management requires borderline-intractable global reasoning to make sure every possible codepath never uses an aliasing pointer after the object has been freed through another pointer. (This is especially difficult on code I'm not wholly familiar with, because I either didn't build it myself or forgot the details.)
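
To make that concrete, here's a minimal C++ sketch of the aliasing hazard and the shared_ptr fallback (Session and every other name here is made up for illustration):

    #include <memory>

    // Hypothetical type, purely for illustration.
    struct Session { int id; };

    int main() {
        // Manual management: an aliasing raw pointer silently dangles.
        Session* s = new Session{1};
        Session* alias = s;   // another codepath stashes a pointer...
        delete s;             // ...and the object is freed elsewhere
        // alias->id;         // use-after-free: undefined behavior

        // The shared_ptr fallback: the object stays alive until the
        // last aliasing pointer is gone, no global reasoning needed.
        auto sp = std::make_shared<Session>();
        sp->id = 2;
        std::shared_ptr<Session> alias2 = sp;  // refcount is now 2
        sp.reset();                            // alias2 keeps it alive
        return alias2->id;                     // safe: returns 2
    }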

And if you have reference cycles, shared_ptr leaks memory, so you need to manually clear shared_ptr on destruction (error-prone and risks leaking), use weak_ptr (runtime overhead and boilerplate on every read, not just creation/destruction), or use raw pointers (error-prone and risks memory unsafety). Rust doesn't help you with reference cycles either, sadly (Rust makes mutating shared references painful, Rc leaks memory, and Weak has runtime overhead and boilerplate, but at least Rc/Weak aren't atomic like shared_ptr). A garbage collector makes your life a lot easier, at the cost of runtime pauses (often higher throughput than refcounting, but slowdown occurs in bursts).
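
And the cycle problem in sketch form (Parent/Child are again hypothetical names): two shared_ptrs pointing at each other would keep both refcounts above zero forever, so one direction has to become a weak_ptr, with the per-read lock()-and-check boilerplate mentioned above:

    #include <memory>

    struct Child;

    struct Parent {
        std::shared_ptr<Child> child;
    };

    struct Child {
        // A shared_ptr<Parent> here would complete a cycle and
        // leak both objects; weak_ptr breaks the cycle.
        std::weak_ptr<Parent> parent;
    };

    int main() {
        auto p = std::make_shared<Parent>();
        p->child = std::make_shared<Child>();
        p->child->parent = p;

        // The runtime overhead on every read: lock() the weak_ptr
        // and test whether the parent is still alive.
        if (auto owner = p->child->parent.lock()) {
            // *owner is guaranteed alive while `owner` is in scope
        }
    }   // p's refcount hits 0 here; Parent and Child are both freed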

----

C++ is intractable to write correct threaded code in. The language doesn't tell you whether a T& is visible from one or multiple threads (which determines whether you need to acquire a mutex to access it). And if 1 object has methods A and B both exposed through a public API (accessible from multiple threads), but method A (which acquires a mutex) calls method B, then method B must acquire the mutex when called from outside, but cannot acquire the mutex when called by method A unless it's a recursive mutex. The alternative is to make neither A nor B acquire the mutex, and require callers to acquire the mutex beforehand (which they may forget to do). I've written more about this at https://write.as/nyanpasu64/c-mutexes-are-easy-to-misuse-and....
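
Here's a minimal sketch of that A/B trap (the class and methods are invented for illustration); the common workaround of a private "_locked" variant that assumes the caller holds the mutex is itself just a convention the compiler won't check for you:

    #include <mutex>

    class Counter {
        std::mutex mu_;
        long value_ = 0;

        // Private variant: caller must already hold mu_.
        void add_locked(long n) { value_ += n; }

    public:
        // "Method B": public entry point, takes the lock itself.
        void add(long n) {
            std::lock_guard<std::mutex> lock(mu_);
            add_locked(n);
        }

        // "Method A": also public, also takes the lock, and needs
        // B's logic. Calling add() here instead of add_locked()
        // would deadlock on this non-recursive std::mutex.
        void add_twice(long n) {
            std::lock_guard<std::mutex> lock(mu_);
            add_locked(n);
            add_locked(n);
        }
    };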

I honestly don't think Python and Java are that much more tractable to write correct threaded code in (note that Python has a global interpreter lock, and I have limited experience with Java threading). Neither distinguishes references to an object visible to one thread from references to an object visible to multiple threads. And neither can force users to acquire a mutex before accessing an object visible to multiple threads, though Java's synchronized methods use recursive locks, which can avoid the method A/B problem above. Also, the consequences of threading errors are slightly less severe (corrupted data and exceptions instead of segfaults).

Rust is tractable to write correct threaded code. Its type system distinguishes &T (sometimes shared between threads, usually read-only), &Mutex<T> (cannot access T without taking a lock), and MutexGuard<T> and &mut T (obtained when you acquire a lock, provides unique access to T while you hold the mutex). This ensures you'll never forget to acquire a mutex, and have to go out of your way to try to acquire a second lock to a mutex you already own.


C++ is a weapon of last resort.

It’s very powerful if you need it, but if you don’t, it’s a lot more cognitive complexity for little gain.


> Programming for Arduino in C/C++ has been the worst most unproductive endeavor I ever undertook as a software developer. I have no idea how people write whole browsers and operating systems in this language.

It would be more straightforward to say "I have no idea how people write whole browsers and operating systems."


> I have no idea how people write whole browsers and operating systems in this language.

They are software developers.


Have you confirmed CircuitPython is suitable for the real-time control and latency requirements of this use? I'd assumed it wouldn't be, due to speed etc.


I checked it in practice, and it seems fine. Reading and parsing a 115200 baud telemetry stream from the ESC is the most expensive part of the whole thing, but even still, I split it up in a way that caps the amount of work that this subsystem is allowed to perform per loop, so all other parts of the loop get a chance to execute very often.

The GC by default only collects garbage once there isn't enough memory for the next allocation. I thought I might need to adjust that, but even though it triggers regularly in my code, it never seems to cause a noticeable slowdown. Despite "flight controller" sounding realtimey, the code is resilient to some reasonable latency.

One potential concern is that CircuitPython does not support user defined interrupts, but I haven't needed that personally. And I think you can still have those if you drop into C? Not sure.

At any rate, if it does get slower as I add more code, for me it's easier (and safer) to deal with that, than with subtle memory corruption foot guns in cpp.


Programming Arduino is the only time I willingly use C++... it's a lot more annoying in other contexts. But still better than some languages.


> more than 30x faster than MicroPython on an ESP32

Based on a quick look at the file “interpreter_run.cc”, Toit’s VM appears to be a stack-based interpreter like MicroPython’s. It would be interesting to know what techniques are being used that enable Toit to be so much faster.


Among other things, Toit uses the selector-based row displacement technique for building a compact method dispatch table that can be queried in constant time.

If you're into fast virtual method calls - also for dynamic languages - you might want to take a look at Karel Driesen's excellent PhD dissertation on the subject:

https://cs.ucsb.edu/sites/default/files/documents/1999-24.ps


Oh, and full disclosure: I'm the author of the blog post.




Some, you know, _actual benchmarks_ might be nice too. The claims are certainly possible, but somewhat hollow without evidence...


Given that this stack costs $6/year, running on a $2 controller, it seems somewhat expensive unless it provides some sort of advantage/benefit. However, the examples on GitHub aren't complex enough to show what the advantage of this ecosystem is.

Perhaps a much more detailed blog post going over an example would help us understand why this stack is useful? Since you appear to have paying customers, there clearly is an advantage, but it is not immediately obvious.


The submission is about the language, which they just open-sourced and thus doesn't cost anything.


Yes, but it may not be worth learning if it gets orphaned once the sugar daddies realize selling cloud-service subscriptions to $2 sensors is a naff idea. A community could pick it up, but that doesn't always work with open-but-abandoned VC-ware.


What happens if the devs making a device stop paying? Do the customers' devices stop working?

Also, that seems like an unacceptably steep BOM item for a consumer device. The MCU itself costs less in many cases over reasonable device lifetimes.


It is worth noting that you can enable and disable serviceability for individual devices whenever you want. Your unserviced devices are free and keep running your code.

https://toit.io/pricing


I'd assume offline features would keep functioning, but online features would stop working. The online features seem to be device provisioning (so you may not be able to change WiFi access points?), OTA updates, and cloud PubSub.


The advantage, from using it a bit (not for anything real yet), is that it supports remote code deploy/logging/communication. So it's the server/maintenance aspect that you are paying a monthly fee for.


What are the advantages of creating a whole new language, as opposed to creating an implementation for a strict subset of an existing language? I skimmed through some tutorials and didn't find anything special about the syntax/semantics. The blog post doesn't explain it either.


I don't personally think that using a subset of an existing language is a pleasant solution for programmers. The language doesn't get a coherent design, and you don't know which parts of the language will be present.

We made lots of small decisions to make Toit useful for IoT. One of the things we optimized for was making the program code ROMable, so you can leave it in flash. This has subtle effects on the language design, but is vital on small devices, where flash is often much larger and more power-efficient than RAM.

We also just wanted to make a modern language with decisions that are natural in 2021, but weren't well known when Python was invented. Things like nullable types, or a language server that gives you nice autocompletion in VS Code. This also affects language design: collection.size is better than len(collection) because you can type collection.<tab> and the editor will help you.

https://twitter.com/toitlang/status/1395365234664161283

https://docs.toit.io/language


This:

>I don't personally think that using a subset of an existing language is a pleasant solution for programmers. The language doesn't get a coherent design, and you don't know which parts of the language will be present.

...sounds exactly like the kind of justification I would come up with if I got excited about designing a new system. Something that has happened many times :-)

(Note: I am just joking here, I have tremendous respect for you and your team mates)


I think it's accurate, though. Far more people know Python and Ruby than Lua, but when it comes to embedding an interpreter in an application, nobody chooses the language subsets of MicroPython and mruby over the complete language of Lua.


I also dislike language subsets. I feel that you run into the limitations of the subset real quick, and figuring out the boundaries of the subset often feels like groping in the dark. That's especially true for large languages like Python.

My most recent experience is with Numba, the Python-subset JIT, so it may be slightly different for MicroPython.


What's the argument here? That it's difficult to learn to use a subset of a language because it is frustrating to locate the boundaries? Isn't it more frustrating to learn a completely unknown language?

Maybe what you're talking about is a combination of things. If you don't know what you can and can't do yet, then you're unlikely to recognize limitations. New languages also probably tend to have documentation that more completely specifies the language, versus subset languages where the implementation might be all there is to tell you what you can't do. In principle, a subset language is a new language that just happens to have familiar syntax and semantics.


In practice though, the familiar syntax and semantics often leads you to believe you can do things that you in fact cannot, until you face the "this feature is not supported" error or go read the documentation and search through the long list of unsupported features, then curse yourself and rewrite your code around the lack of said feature.

It's the gap between expectation and reality, coupled with the productivity hit of often using some feature, then having to rewrite without said feature, that leads me to dislike subset languages. It's often ultimately a different language with only syntactic similarity.


The advantage is $6/year/device. You need to raise some barriers so people don’t immediately switch.


Like AssemblyScript?


the answer is hidden in the name of the language ;-)


The docs page linked is, as expected, about language and syntax basics. Is there an example of it in use in a project, eg with interrupts, GPIO, bus access, DMA etc? There are a few examples on the Github, but they're terse.

What does performing an SPI etc. operation look like? How will syntax for things like this map to MCU hardware? E.g. built into the language, or with HAL and peripheral-access libs etc.?

What makes it specifically targeted at IoT vice general embedded? RF support built into the standard lib? Security features, e.g. like the nRF-53 and Cortex-M33?


As an example, here's the driver for small TFT screens of the type that is built into the M5Stack. This is an SPI-attached device: https://github.com/toitware/toit-color-tft/blob/main/src/col...

If you just have an M5Stack Core2 and want to play with it, then you don't need to write a new driver though. Probably you want the examples from this package: https://pkg.toit.io/package/github.com%2Ftoitware%2Ftoit-m5s...

(The implementation of that package in the src directory is an example of using an I2C peripheral.)


How many bytes of bytecode & any envelope does this file compile to? Together with the include'd deps vs. alone without them? How many bytes does the VM interpreter & runtime take?


Looks like it isn't statically typed and doesn't even support type hints. That seems like a fairly big mistake to me, and I'm surprised to see it from the authors of Dart, who went through the whole Dart 1/2 journey from "optional type hints" to "hmm, it turns out static types are actually fairly critical".

Surely better to design the language with static types than to have to awkwardly tack them on later if it gets popular?

Documentation looks excellent though! Especially the language comparison.


Toit is optionally typed, and supports type annotations.

Type annotations for locals, fields, and globals are written with a trailing `/Type`. Return types are written with `-> ReturnType`.

See https://github.com/toitware/toit-lsm303dlhc/blob/main/src/ac... for a file I recently edited.

When a type annotation is written, the compiler enforces it. It uses it for static optimizations, and dynamically checks that the type is correct.

When a type can be null, it has to be suffixed by `?`.


Ah ok, I couldn't find anything about that in the docs but maybe I missed it. Interesting syntax choice when everyone else is going with `name: type`.


> dynamically checks that the type is correct

How thorough is this? Beartype is also on the front page right now, and its README contains an example of dynamic type checking in Python that takes over an hour to run [0].

Is Toit able to avoid being that slow?

[0] https://github.com/beartype/beartype#why-should-i-use-bearty...


Toit doesn't have generic types yet. This limitation means that the dynamic checks are very fast.

Since these checks also allow some optimizations, the cost of the dynamic checks is maybe 10-15%. And that's before doing a global type-inference, which should remove many of the checks.


Side question, and a bit of a general question too... how do you make it fast while having optional type annotations?


We get a lot out of having static classes and methods that can't change dynamically. This allows us to use the selector-based row displacement technique for building a compact method dispatch table. (See https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.29...)
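
For intuition, here's a toy sketch of the row-displacement idea (in C++ here, and emphatically not Toit's actual implementation): each selector's row of the selector-by-class matrix is slid to an offset where its occupied entries fall into free slots of one shared array, so a dispatch is a single indexed load plus one comparison:

    #include <vector>

    using Method = void (*)();

    struct Entry {
        int selector = -1;   // which selector owns this slot (-1 = empty)
        Method method = nullptr;
    };

    struct DispatchTable {
        std::vector<Entry> slots;          // the packed master array
        std::vector<int> selector_offset;  // one displacement per selector

        // matrix[sel][cls] is the method class `cls` implements for
        // selector `sel`, or nullptr. Greedily slide each selector's
        // row right until its occupied entries land in free slots.
        void build(const std::vector<std::vector<Method>>& matrix) {
            selector_offset.assign(matrix.size(), 0);
            for (int sel = 0; sel < (int)matrix.size(); ++sel) {
                int offset = 0;
                while (collides(matrix[sel], offset)) ++offset;
                selector_offset[sel] = offset;
                for (int cls = 0; cls < (int)matrix[sel].size(); ++cls) {
                    if (!matrix[sel][cls]) continue;
                    size_t slot = offset + cls;
                    if (slot >= slots.size()) slots.resize(slot + 1);
                    slots[slot] = {sel, matrix[sel][cls]};
                }
            }
        }

        bool collides(const std::vector<Method>& row, int offset) const {
            for (int cls = 0; cls < (int)row.size(); ++cls) {
                size_t slot = offset + cls;
                if (row[cls] && slot < slots.size() &&
                    slots[slot].selector != -1) return true;
            }
            return false;
        }

        // Constant-time dispatch: one add, one load, one compare.
        // A selector mismatch means the slot belongs to another row,
        // i.e. this class doesn't implement the selector at all.
        Method lookup(int sel, int cls) const {
            size_t slot = selector_offset[sel] + cls;
            if (slot < slots.size() && slots[slot].selector == sel)
                return slots[slot].method;
            return nullptr;  // "message not understood"
        }
    };

(The interesting engineering is in the packing heuristics; Driesen's dissertation linked above covers those in depth.)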

Many small decisions help keep the system fast: the memory layout, the bytecodes, the FFI interface, ...

Separating the compilation process from the running process obviously also helps. It gives us the option to do slower optimizations beforehand, and not pay for them at runtime. (Independent of the fact that the ESP32 wouldn't be able to do big optimizations anyway.)

This is actually an area we haven't really spent too much time on yet: optimizations. The current compiler doesn't even do inlining yet. The speed of Toit was never really a problem, and we preferred to spend time elsewhere. Eventually, we will definitely do more there again.


AFAIK some of the authors of Toit were involved in the optionally-typed Dart 1, but the statically-typed Dart 2 was designed by other people.


It's always fun seeing VMs written in the Smalltalk/V8/Dart/... lineage, because the implementations use similar structure, style, and naming. It's a pleasure to read.


Can you elaborate on what you mean?


Check out, for example, Dart's VM. Or https://github.com/facebookexperimental/skybison

Look at objects.h, for example, or the interpreters.


I also found it immediately a pleasure to read so I was going to attempt to answer your question. However, I think "pleasure to read" in a language is very subjective and hard to break down.


How does it compare to TinyGo? (https://docs.toit.io/language/toitversus compares it only against languages not made for IoT, which is not a very relevant comparison.)


It's not Go ;)

In all seriousness, though: if you like Go, then tinygo could be a good solution.

Personally, I find coding in Toit just a much more pleasant experience.


An unpublished CLA is a no-go. I assume the CLA says you can distribute my code under a proprietary license. Nope, nope, nope. Fix that, then I'll take a look.


Humor HN: Can I get a +1 for the Austin Powers connection to the title word?


I was assuming Brooklyn 99.


Previous attempt to create a language for networked embedded systems specifically: https://sing.stanford.edu/site/publications/pldi03gay.pdf


One previous related thread:

The Toit Programming Language - https://news.ycombinator.com/item?id=26365202 - March 2021 (18 comments)


This is awesome, congrats on the work.

Is the Toit language compilable to WebAssembly? I think it could be a great win for usage across different ecosystems!


I believe I once managed to make Toit run in WebAssembly.

I know for sure that the compiler compiled to it, but I'm pretty sure I also had a "hello world" that worked.

I would need to find that branch and see if it still merges cleanly.


You were able to compile Toit code direct to WASM? I guess that required using static types throughout?

Or did you compile the VM to WASM instead, and interpret the Toit code as normal?


I compiled the VM to WASM and ran the bytecode in it.


That would be great, thanks! I think if Toit can jump on the Wasm train, it could help the language even further (cleanly executing programs in the browser as well!)


Another Python-esque embedded language is Snek:

https://sneklang.org/


Does it have distribution and fault tolerance like Nerves on Elixir, aside from having an LGPL vs. Apache (Nerves) license?

It may be fast, but how does it compare to Nerves for highly distributed, swarm, or fault-tolerant networks using many IoT devices for distributed control and information gathering?


Nerves (or Erlang’s VM, BEAM) is nowhere near small enough to run on an ESP32. I think a comparison is not very useful.

If you have an ESP32 you need something like Toit, if you have a Raspberry Pi you might as well run embedded Linux (with Nerves on top).


I agree, but I saw the defense that Toit isn't just for the ESP32. I have been playing with ESP chips since first tinkering with them over 8 years ago. C is fine when you're that small, as are Lua or even uLisp [1], Forth, etc. Why Toit? Bowery Farming uses Elixir Nerves for their network of small interconnected devices to run their vertical, indoor farming operations, and I think others are looking to shrink it.

[1] http://www.ulisp.com/


Interesting YAML-esque syntax. I've always liked the idea of automatic variables, since seeing them in array languages like J or K.

I half wonder if the team felt any pressure to release the language now, ahead of Advent of Code...


Is Lars Bak no longer involved with Toit?

https://toit.io/company/about


He is still a minority shareholder according to the Danish Central Business Register:

https://datacvr.virk.dk/data/index.php?enhedstype=virksomhed...


A subset of Python with a flair of Smalltalk?


Can I run my own server and avoid the fees for updating devices?


Toit like a toiger!


https://www.youtube.com/watch?v=gu31VyXlTzo

(I will admit I thought the same thing, pleased to see it wasn't just me.)


Can the language compile to WASM?


[flagged]


To me those two sound quite different.

Here is how we pronounce it: https://www.youtube.com/watch?v=JVKb8cm1B20


Also my first thought though, reminds me of https://www.youtube.com/watch?v=FboWtJiNYro ¯\_(ツ)_/¯


English spelling is a mess... spelling it as "Toyt" would avoid the ambiguity. Or, if the "IT" is from "Internet of things", you could go with "Adroit"... That's a real word and thus has a memorized pronunciation in English, even though that's in turn a "mispronounced" French word.

Sorry for bikeshedding! I love new programming languages, but even I'm going to be distracted if it's named after a female body part. For fun:

George Carlin: https://youtu.be/6ulTgP6fjfA?t=598


It's tɔɪt versus twŏt. Definitely not the same. Your bikeshedding is way over the line. It is just as much the French word for "roof", or a slangy spelling of "tight". Don't make it their problem that you have a (weird) association.


I spent about a minute wondering why someone made yet another software project that I can't talk about with non-nerds (or in mixed company at all, or with people I don't know very, very well and do not work with, etc.), before figuring out other ways to say it (my first alternative guess was as in "a round tuit" = "to it", though note the spelling there...)

Blame French classes, I guess. And I'm American!

I bet if I passed this around on my non-nerd chat friend groups at least half would guess the vulgar-sounding version first.


Why do you assume English? The word is not in the English vocabulary, so the default pronunciation should be Latin, as it's written using the Latin alphabet.


I am not sure the closing sentence "let the game begin" entices me to check this out.


It is a language and a platform for the Chinese ESP32. I wish that had been in the article header; it would have saved me from reading the article.


I am pretty sure whatever you used to type this message in has a lot of Chinese parts.


The Toit language isn't bound to run on ESP32s.

We routinely run it on our desktops as well.


Is it possible to compile the code at https://github.com/toitlang/toit to run on a desktop? Are there instructions on how to make such a build?


You are linking to the page with the instructions on it.


Thanks - the page is very focused on the ESP32, and I missed the "Build for Linux" section in the middle.


This is an extraordinary step forward for our industry, and I’m quite certain we will look back five or perhaps 10 years from now at this moment and realize just how pivotal it was.


Why? What's different?



