DeviceScript – TypeScript for Tiny IoT Devices (github.com/microsoft)
264 points by stunt on June 9, 2023 | 157 comments



Previously discussed a couple of weeks ago: https://news.ycombinator.com/item?id=36059878 (101 comments)


Thanks! Macroexpanded:

DeviceScript: TypeScript for Tiny IoT Devices - https://news.ycombinator.com/item?id=36059878 - May 2023 (101 comments)


One of the biggest challenges will be drivers for the actual hardware - at the moment it looks like they have support for SPI and for built-in LEDs. But it's a big challenge to expose all the different peripherals that come with MCUs in a consistent way.

Actually, looks like they've got quite a lot of stuff done already:

https://microsoft.github.io/devicescript/api/clients

https://microsoft.github.io/devicescript/api/servers


With MicroPython you can have C drivers and just do the MicroPython bindings. I do not see the benefits of having yet another interpreted IoT language and it does not make sense to do the drivers in the interpreted language itself.


I think having a strongly typed language is a great reason to have another interpreted language for IoT/embedded.


TypeScript is more "firmly" typed than strongly typed, in my not-so-humble opinion.

I guess it beats C for IoT, but with TypeScript it still feels like the S in IoT stands for Security.


The consequences of a type unsoundness in TypeScript are typically much more benign (an exception typically) than in C (a buffer overflow, arbitrary code execution, etc.). Also the application logic is more visible in TypeScript just due to it being higher-level - there isn't so much low-level detail (ever done JSON in C?) so less chance for a mistake.
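
To make the JSON point concrete, here is a trivial illustration (the Config shape is made up):

  interface Config { ssid: string; interval: number }
  const cfg: Config = JSON.parse('{"ssid":"lab","interval":60}');
  console.log(cfg.ssid, cfg.interval); // typos like cfg.intervall are compile errors

In C, the same task means pulling in a parser library, checking allocations, and walking a tree of tagged unions by hand.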


They should integrate with Zephyr OS, as it has a good generic sensor API, with tons of device drivers and platform support.


I wouldn't say zephyr has "tons" of drivers, except maybe in comparison to other RTOSes. It has a few drivers for each of the device types you might use, but you can't just buy random hardware without checking support the way you can with Linux. There's enough that you have examples for implementing your own drivers pretty easily though.


I agree with your driver assessment as someone who's used Zephyr quite a bit. If you're working with a new sensor, writing a new one is super straightforward. I don't have much experience picking sensor devices for embedded Linux, but I didn't think they had a ton of drivers for those types of devices. Bluetooth, networking, and SoCs, sure, but not so much for I2C/SPI sensors and displays and whatnot.


Tons might be stretching it. But to my knowledge there are not many alternatives for microcontrollers that have more? Hopefully one day it will be like Linux. At least they are trying to tackle the driver problem, unlike most other RTOSes.


Is the only reason for this to make it possible for web developers, or people who know TypeScript, to write code for IoT devices? To fill the lack of experienced low-level programmers? Because as a language alone, I don't see a reason why I should ever use this if I am not familiar with TypeScript already.


There are a number of reasons to do this. This sort of setup typically has the VM runtime flashed into the microprocessor's program space with the interpreted byte code stored in data space --- either internal or external.

1) It is obviously not as fast as native but it is fast enough for a lot of applications. In embedded work, you don't get extra credit for being faster than necessary. Speed critical functions like communications are mostly handled at native speed by the VM.

2) Code size. Interpreted byte code (minus the VM) can be smaller and less redundant than native. And by adding cheap external storage, code can easily expand beyond the native program space of the micro.

3) Easy remote updates. Byte code can be received and stored without making changes to the micro's program code (no re-flashing required). It's possible to fix bugs and make changes or even completely repurpose the hardware from afar.
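
For intuition, here is a toy dispatch loop in TypeScript - purely illustrative, not DeviceScript's actual bytecode format:

  // The interpreter is fixed firmware; the program is just a blob of data
  // that can be replaced over the air without re-flashing.
  enum Op { PUSH, ADD, PRINT, HALT }

  function run(code: Uint8Array): void {
    const stack: number[] = [];
    let pc = 0;
    for (;;) {
      switch (code[pc++]) {
        case Op.PUSH: stack.push(code[pc++]); break;
        case Op.ADD: stack.push(stack.pop()! + stack.pop()!); break;
        case Op.PRINT: console.log(stack[stack.length - 1]); break;
        case Op.HALT: return;
      }
    }
  }

  run(new Uint8Array([Op.PUSH, 2, Op.PUSH, 3, Op.ADD, Op.PRINT, Op.HALT])); // prints 5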


>In embedded work, you don't get extra credit for being faster than necessary.

For battery powered devices, you absolutely do. When you're talking over a slower protocol, being able to put the processor to sleep for 95% of the time can translate to pretty massive power savings.
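
To put rough numbers on that (illustrative figures, not from any datasheet):

  awake:  20 mA  at  5% duty = 1.0000 mA
  asleep: 10 uA  at 95% duty = 0.0095 mA
  average ~1.01 mA, vs. 20 mA if the MCU never sleeps - roughly 20x the battery life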


> In embedded work, you don't get extra credit for being faster than necessary.

You absolutely do when you can cut power requirements and get by with cheaper CPU/hardware. I ran a whole consulting business redesigning poorly designed devices and redoing firmware for cost reduction. How does one decide what is “necessary”? What is necessary in the short term and the long term are often not the same.


3 is a big one, I think. Using an interpreted language speeds up the development cycle to prototype, change, release, and iterate. And for some purposes it's fast and small enough, that the trade-off is worth it.


This makes a lot of sense. However, it makes me wonder how big the new attack surface for remote upgrades/updates is.

You need to implement a safe updater (with remote protocols) at the VM level. And I guess you can never upgrade the VM itself - or if you can, it adds some extra complexity, or requires physical access.

There also needs to be some kind of signature validation for every release, which means that the device needs to perform some cryptographic operations and store at least tamper-proof public keys.
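
The shape of that check is simple; here is a hedged sketch in TypeScript using Node's crypto (the names are made up, and a real MCU would use a crypto peripheral or an embedded library rather than Node):

  import { verify } from "node:crypto";

  // Accept an update only if its detached Ed25519 signature checks out
  // against a public key baked into the firmware image.
  function updateIsAuthentic(blob: Buffer, sig: Buffer, publicKeyPem: string): boolean {
    // For Ed25519, Node's verify() takes null as the algorithm name;
    // the key itself determines the scheme.
    return verify(null, blob, publicKeyPem, sig);
  }

The hard parts are storing the key tamper-proof and making sure a failed or interrupted update can't brick the device.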


I can't really see how this is different from a native-code based device, especially one which is actually following good practice by not trusting what's in flash. Every stage of the boot chain still has to validate the next - there's just one more layer on top where the application is the runtime VM and it has to validate sub-applications / managed code.


I’m not sure if you get the controllers part of embedded microcontrollers. They’re not for time-independent deep thoughts, they are for controlling devices and peripherals in real time.

1) you get shorter battery life from slower code. Also, everything is speed critical anyway. It sucks when I’m controlling my toy robot and it’s trying to drive into the floor while doMyStuff(); has its head stuck in float math.

2) external anything adds cost to everything; adding an SPI flash for code to an otherwise finished board costs, I don’t know, $2 for the chip, $0.02 for capacitors and resistors, 2-3 days to redo the board, software… can I just make it a want item for v2.0?

3) why do I have to do it wirelessly? Can I just snatch a prototype from the test batch, wire it to a debugger and leave it on a desk? Do I really have to manage these keys, and why are you going to sign it off for release if it’s not working?

Embedded devices are not like a Nintendo Switch; they’re like Switch Joy-Cons, or even the buttons on a Joy-Con sometimes. They are not like a nail-top autonomous Pi-calculation device. Admittedly, Nintendo updates Joy-Con firmware sometimes, but very rarely, and they don’t make Switch-playing kids wait for the X button input to finish processing. The buttons are read, sent out, and received. It just makes no sense that adding drag to real-time computing this way would help much.


>"In embedded work, you don't get extra credit for being faster than necessary"

You do get credit for using the cheapest, lowest-cost MCU for the task, which directly depends on the performance of the code. In the case of battery-operated devices it is even more important.


The debuggability is also far better I expect, as a person who has spent hours tracing some crash deep in LwIP because of a bad flag or wrong interrupt priority.


Yes.

Assuming you're working with a quality VM and drivers, development speed is also improved. A lot of low level details have already been worked out in the VM thus freeing the programmer to work at a higher level.


LwIP (especially with Simcom’s SIM7000 via PPPoS underneath it) gives me PTSD. Even with decent JTAG debugging, it’s such a pain.


I think it's especially scary when most devkits come with it pre-integrated by the manufacturer, but the quality of that integration varied widely. My experience is nearly a decade out of date at this point, but when I was last doing Cortex M3 stuff, I found that the integration on TI/Stellaris was excellent, and the integration on STM32 was one landmine after another - and the same held true for the USB stacks, even for stuff that should have been dirt simple, like just emulating a serial port.


Yeah, the STM32 dev kit (CubeMX etc.) is all kinds of horrifying. ESP-IDF is better in some ways, worse in others. Its LwIP integration (esp_netif) is okay but has some interesting bugs you can hit. Like socket creation leaking memory even after the socket is closed! Yay!


3) isn't really practical without some storage mechanism, though. Sure, you can make a change that sits in RAM until the next power cycle, but you could do that with firmware too if you plan for it. Whether you store the raw data in executable flash or in some external EEPROM doesn't really change the workflow much.


>Is the only reason for this to make it possible for web developers or people who know TypeScript to write code for IoT Devices? To fill the lack of experienced low level programmers?

Which is funny because there's no lack of low level programmers in my experience.

Companies just try to pay Embedded Systems Engineers and Electrical Engineers dirt compared to "Software Engineers". In the same part of NY where SWE base salaries are $160k+, they will offer $120k for the same years of experience to an EE/Embedded SWE. Both major companies (non-big tech) and small companies will do this.

Of course, I also see the job postings at those specific companies last a long time here. There's one that's been up for 3 years now at a decades-old small defense contractor that's even had its CTO reach out to me directly. Meanwhile I sit pretty in my actually respectably paying embedded job that I ain't jumping ship from for a pay cut.


I wonder if many of these lowball salary job postings could just be companies applying for an employee's greencard through the PERM system. IIUC, for PERM, the applicant's employer is required to put out a job posting for at least 10 days to show nobody else can do the job their employee/greencard applicant does. The salary they list in the posting must be at least the minimum wage dictated by the Department of Labor for the role.

However, I suspect that the company making the ad will just list the lowest possible salary in the posting to deter real applicants from applying, hence making the greencard applications smoother.

However, don't quote me on this, since this is just my very vague knowledge on how greencard applications work. Somebody else here who knows more about this topic, please chime in to let me know if this is true.


In my experience, a firmware engineer contractor is much more expensive than a web contractor. But that's probably just a contractor supply/demand thing vs full-time.


The same could be said for MicroPython or CircuitPython.

I suspect the target audience is more on the hobbyist/non embedded programmer side of things.


There are some differences, but broadly correct (e.g., the program is precompiled and the experience is definitely best in VSCode; plus of course a different language).

However, I have heard of uPython being used in production deployments, though maybe not in millions of units.

(I'm working on DeviceScript)


The thing is, if you want to poke at a device interactively, your options are either Forth or (if the system is beefy enough) one of these scripting language ports; tethered C interpreters are not precisely commonplace. And while I love Forth to bits, most people will probably go the scripting-language route.


A whole 8-bit computing generation was able to jump into computing thanks to scripting.


I started my career in C/C++ and embedded. Every time I go back to work on embedded stuff for fun, it's like going back in time and losing all of the nice things that come with languages like TypeScript or JS. Suddenly things like associative arrays, maps, sets, lists, vectors - all require much more mental overhead to deal with and ultimately get in the way of what I actually want to be doing.


TypeScript development on VScode is excellent (I'm told) so that would be a reason for this.

Excellent tooling exists for you if your language is TypeScript, so maybe try putting TypeScript in more places.


It most certainly is. I've tried a lot of development solutions over the years, and always find myself coming back to VSCode. The TypeScript support is amazing.


it's not C, so that's a great step towards making programming IoT more accessible.


Well, that's valid as long as you don't get an exception in the underlying C driver. Now you are solving two possible problems - an error in your script or an error in the driver. Good luck debugging that without a proper debugging probe.


That's been the argument against higher level languages since punch cards.

Sure, there may be problems, yet somehow the internet runs.

Coding in asm is fun, and a good skill to have for when you need it - but most of the time, you don't.


The problem is not the higher-level language. The problem is that you are essentially running dual-stack firmware with twice the complexity and twice the problems.

And the C vs. assembly comparison is irrelevant. You are running assembly generated from C, not interpreting C in a VM written in assembly.


Then I guess you'd better avoid any sort of embedded libc implementation too.

Or fadecandy, or any external library.

I've been through hell in embedded systems land debugging chipmaker-supplied C compiler problems.

Should I just write it all in asm then? I've certainly made that choice for some projects.

The point is, at some point you just want to get things done and you have to trust your tool chain.


You are completely missing my point. When I have C code and compile it, I get opcodes which the processor can run. All my bugs are going to happen only in the C code, and I can debug those using one debugging probe.

When I have embedded scripting, I have the C code which implements my VM, and I also have scripts. Then I am hunting for a bug in the C code and in the script itself. I need two debugging stacks to get the problem solved. Sure, you can say that I can run the scripts on my PC, but what if everything works on my PC, yet the moment I load it on the target the script crashes? Then without a proper debugger I am screwed.


DeviceScript comes with a source-level debugger for the VM code.

Indeed, if you add C code and that crashes, it may be harder to debug than a pure C system. However, that wasn't my experience while developing DeviceScript - it was either clearly a bug in C that didn't really depend much on the bytecode, or an exception in DeviceScript, which you debug just like in TypeScript.

We also support Jacdac [0], which makes it possible to put the C code on a different MCU, isolating the problem and adding traceability. (You can also run the Jacdac server on the same MCU.)

[0] https://microsoft.github.io/jacdac-docs/


I appreciate your engagement throughout this post, including how you are gently educating the naysayers.




I, on the other hand, started my "programming life" with C++ (because I was interested in a project written in C++), then went to C# semi-professionally. If I were doing web, it'd be with TS because of the tooling; types also help me organize my brain, and the escape hatch into YOLO land is very close at hand in TS, considering it's mostly compiled into JS - the type system allows for "any" and such while still restricting things that aren't just prototypes anymore.

Both C# and TS come "from the same author", so that might influence me a bit.


TypeScript/JavaScript are the modern PHP or Java of the world. It's something people get pushed into from school and their "first intro to programming", and they often succeed through the language's popularity. But there's a subset that will struggle to actually be a "computer scientist" and pivot between languages and technologies in the long term.


My personal opinion is that a good TypeScript-to-C transpiler would do a way better job - tons of microcontrollers supported out of the box. I would also be happy to use it for desktop. With some tricks it could even support references and additional numeric data types (u8, i8, ...) without breaking any syntax.


Pure Typescript supporting all of the Ecmascript spec would have to output a lot of C to get the same results.


It obviously would have to be a limited subset, just as DeviceScript is also "just" a subset of TypeScript.

I think the limitations of DeviceScript would probably also work for outputting reasonable amounts of C.


I think a language that just transpiled to the equivalent C would be pretty awesome. I know Google is building Carbon, but it is more focused on C++. They pitch it as TypeScript-to-JavaScript, Scala-to-Java, Carbon-to-C++.

Instead of Rust or Zig trying to replace C++ or Java, it seems better to just integrate with it without linking through some FFI.

I'm working on some C code for some microcontroller since it was too difficult to use Rust.


I use Nim on embedded precisely for that reason: https://github.com/elcritch/nesper

I wrapped much of Zephyr as well, but that one's less used: https://github.com/embeddednim/nephyr


I could be wrong, but don't vlang and nim do this?


In the case of V, much of what is being wished for is already there. V (an easier, higher-level C-family language) can be transpiled to readable C. Even more, V can transpile C to V (C2V), and it can be used like a scripting language even though it's a compiled language.


Has vlang managed to get past its controversies and actually deliver on its promises? Last I heard it was still promising too much, underdelivering, and not handling the feedback too well.


>Last I heard it was still promising too much, under delivering and not handling the feedback too well.

This is an odd statement, as if pushing along an artificially generated negative narrative. It comes across as never having used what one is talking about, and instead relying on hearsay and rumor ("last I heard"), which easily can come from competitors and trolls, as the basis of information rather than facts.

Constructive feedback would be going to their GitHub and making suggestions, filing useful bug reports, or helping to implement some feature (for those who are really as technically skilled as they claim elsewhere).

> Has vlang managed to get past its controversies and actually deliver on its promises?

The V project is constantly delivering. This can be seen by their near weekly updates, projects on VPM, projects on Awesome V, etc...

The so-called "controversies", have much to do with competitors and trolls, as with anything else.


To be clear: Vlang overpromised and underdelivered on many features. It also included features by shelling out to other executables like curl.

It might have all been fixed by now, but it is a fact that Vlang has had listed many, many, features on its homepage (without any indication that they were work in progress) that had no implementation and that had no proper prototype.

The defensiveness of Vlang supporters is not a good look.


> ...overpromised and underdelivered on many features...

We could argue that all programming languages that aren't 1.0, have not delivered yet. So, with that same energy, it will be interesting to go chase around supporters of Jai, Mojo, Zig, Odin, etc... with the same rhetoric.

Vlang is "delivering" to its users, as evident by its near weekly updates, VPM, or Awesome V site. It's an open-source project and language. Developers are free to join the project to ensure "delivery of the product".

Else if they are not using said product, they don't have to worry about it. It would be bizarre for any person to be so obsessed with something one doesn't want to use, unless maybe competitors who are afraid of that language.

> ...shelling out to other executables like curl.

Vlang doesn't use curl in its modules. Everyone is free to check its source code. It has its own such functionality, written in V. So that looks like misinformation.

> ...defensiveness of Vlang supporters...

Didn't know about this new rule, where if one posts a response 16 days later, the other isn't allowed to make a counterpoint.

By the way, don't represent "all" Vlang supporters. Just giving my personal opinions, here on HN, like others are allowed to and for the languages they like.

> It might have all been fixed by now...

If one doesn't know about the thing they are talking down on ("It might"), then it looks like something very odd is going on. It doesn't make sense to worry and talk about something that one doesn't use, except, again, for a competitor or someone just out to troll.

> features on its homepage (without any indication that they were work in progress) that had no implementation and that had no proper prototype...

Maybe there is confusion about what year or time period is being talked about. This is the year 2023. Perhaps what is being referred to is 2019, the first day(s) of the language being released, or mistakes on its website. Here's the thing though: it's a free open-source language. Pretty sure such mistakes have happened on websites and with other languages before (especially just-released ones).

Not seeing people on HN chasing down supporters of Jai, Mojo, Zig, Odin, etc... about stuff on their website from 4 and 5 years ago that nobody cares about. Furthermore, people freely choose to use, support, or donate. Not understanding being upset over what languages other people like to use, well unless...


I have never used it. I just remember reading in the README that it compiles to human-readable C.


In that case it might be useful to check previous threads here, since vlang has been quite controversial. Or check what they have implemented versus promised.


>...it might be useful to check previous threads here since vlang has been quite controversial.

The majority of controversy has been generated by competitors and detractors using various social media platforms to spread misinformation. To include identified troll accounts created specifically for the purpose of launching such attacks. Something like, "throwing stones and hiding their hands".

It would arguably be better to try the language out for oneself, and form one's own opinion, versus allowing known competitors and evangelists who are purposely spreading misinformation to shape one's mind. Just like it is common sense to not expect a used car dealer trying to make sales to give fair and honest opinions or assessments about competitor's cars.

> Or check what they have implemented versus promised.

Yes, totally agree that the best way is to check something out, and form one's own opinion. Also, these are free open-source languages that we are referring to. People are free to go to their GitHub and make suggestions, discuss, or join the effort to help implement whatever they feel is needed.


It delivers on everything listed on the website/docs. You can check for yourself.


I'm not an expert, but wouldn't garbage collection be a difficult problem as well?


It would have to include its own garbage collector, yes.


Does such a thing exist? I would love something like that, but is it even feasible? Isn’t there a lot more you need to be aware of, to make a translation of say TS’s objects into C?


These are far from perfect, but still something:

https://github.com/andrei-markeev/ts2c/

https://github.com/evanw/thinscript

If you aim for 32-bit microcontrollers, then you can go with AssemblyScript to WASM and then a WASM-to-C transpiler.


They would likely have to severely restrict the range of supported types, for example `window` is likely impossible to compile meaningfully.


I really want to be able to compile typescript to not javascript. Ideally to native but if it’s bytecode or whatever that’s fine I don’t care.

It’s a nightmare trying to deal with bundling and compiling and targets and different import schemes ugh.

I wish I could just compile my programs - have it compile all the stuff in node_modules and give me some working code.

Desperate to get away from nodejs, I tried deno and bun… neither of them is anything close to a drop-in replacement for nodejs.


So you want structural typing, but for native?

Funny, the fact that typescript is "only" structurally typed is one of my main pain points with the language. (Of course it's tons better than vanilla JS)


What exactly do you miss? JS has built-in reflection, e.g. `typeof instance`, `instance.constructor.name`. Even multiple dispatch can be hacked together if you really need it; e.g., there is the library @arrows/multimethod.


Those don't change the fact that the type system is structural; the type system is actually not fully safe if you use the things you mention, e.g.:

  class Dog { woof(){} }
  class Cat { meow(){} }
  function f(a: Dog|Cat) {
    if (a instanceof Dog) { 
      a.woof()
    } else {
      a.meow()
    }
  }

  let dogish = {woof: ()=>{}}
  f(dogish)

This compiles because dogish is structurally a Dog; the type system allows instanceof to narrow the type, but "dogish instanceof Dog" is actually false, so at runtime this will crash when it tries to call meow on dogish.


Yeah don't do that. :)

Do this:

  interface Dog { typeName: "Dog"; woof():void }
  interface Cat { typeName: "Cat"; meow():void }
  
  function isDog(a:Dog|Cat) : a is Dog {
    return a.typeName == "Dog"; // Some casting may be required here
  }
  
  function f(a: Dog|Cat) {
    if (isDog(a)) a.woof();
    else a.meow();
  }
  
  let dogish : Dog = {typeName:"Dog", woof: ()=>{ console.log("Woof this is Dog")}};
  f(dogish);
The neat thing about TypeScript's type system is that it's structural but you can pretty easily implement nominal types on top of it.

(And if you only need compile-time checks, you can make typeName nullable; {typeName?:"Dog"} != {typeName?:"Cat"})


Sure, but the opt-in nominal types you describe are still something of a foot-gun that you have to be wary of, since it's easy to accidentally pass something that conforms to the structural type but violates whatever expectation. Here's an example:

  type SpecificJsonBlob = {x: number};
  function serialize(s: SpecificJsonBlob) { return JSON.stringify(s); }
  function addOneToX(s: SpecificJsonBlob) { s.x += 1 }
  [...]
  class Blobish { get x() { return 1 } } // getter on the prototype, no setter
  let o = new Blobish()
  serialize(o)
  addOneToX(o)
This compiles because the getter makes it structurally conform to the type, but 'serialize' will surprisingly return just '{}', since JSON.stringify ignores getters inherited from the prototype, and addOneToX(o) is just a runtime crash, since there is no setter for x. These are runtime issues that would be precluded by the type system in a nominally typed language.

There are obviously other cases where structural typing is an ergonomic benefit (including that idiomatic JS duck-typing patterns work as you'd hope), but I think it's not unreasonable for someone to feel that it's their biggest problem with TypeScript (as the grandparent OP did).


Idk, I think it's statistically rare for nominative typing to be a better default than structural. In other words, it's much more useful (by default) to have a function work on more types than to be a little more type-safe for exactly the type the author had in mind when writing it. In my opinion, type safety has diminishing returns, and most of its usefulness lies in trivial things - like passing a string instead of some object or array as a typo, or writing a wrong field name when mapping one object to another - but nominative typing lies way beyond my imaginary line marking the zone where types become more of an annoyance than a help.


This is pretty much what I ended up with (with a string-const generic type param, actually), but now I have this extra implementation detail that I need to add/remove at every service boundary. It just makes every serialization/deserialization more complicated and generally adds cruft.


The fact that it compiles is due to TypeScript’s goal of being pragmatic rather than sound. If an instance isn’t derived from the class Dog, it doesn’t mean that it doesn’t conform to the type/interface Dog, yet TypeScript assumes it means that for the instanceof check. It has more to do with TypeScript being a bolt-on over JavaScript that has to take into account existing codebases than with TypeScript being structurally typed.


My classic example case is actually even simpler IMHO:

    type userId = string;
    type subscriptionId = string;
    
    const uid: userId = 'userA';
    const sid: subscriptionId = uid; // compiler is OK with this


Isn't that just how type aliases work rather than anything to do with structural typing? That example also compiles in Haskell and Go and Kotlin.

Here's how type aliases are usually documented: "Type aliases do not introduce new types. They are equivalent to the corresponding underlying types."

The person above did have a better example of the downside: You usually want something to nominally comply with a specific interface, not structurally. i.e. Just because something happens to have an `encrypt(u8[]): u8[]` method doesn't mean you want to accept it as an argument. (Go interfaces have a similar issue.)
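
(Not from the thread, but for the curious: the usual TypeScript workaround is a "branded" alias, which makes the compiler reject exactly the mix-up shown above.)

  type UserId = string & { readonly __brand: 'UserId' };
  type SubscriptionId = string & { readonly __brand: 'SubscriptionId' };

  const uid = 'userA' as UserId;
  const sid: SubscriptionId = uid; // error: the '__brand' types are incompatible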


Those are just type aliases. They're just different names for `string`.


What about AssemblyScript?


> == and != operators (other than == null/undefined); please use === and !===

That's a whole lot of equals signs.

Typos aside, this all looks really amazing!

Although by no means sound, the ergonomics of TypeScript's type system are phenomenal. Really glad to see additional use cases beyond traditional JS runtimes.


TypeScript has to inherit this typing madness because it’s strictly a JavaScript superset. As a decade-long JS developer, I avoid == and != like the plague because of the type coercions and funny results that I don’t bother to remember.



!=== is a typo and should probably be !==



Main thing everyone overlooks: if you can run the same program in C on an MCU that costs 5 cents less, they are absolutely gonna go with that.

Cost cutting in high volume electronics is crazy.


There are plenty of use cases for electronics that are not high volume. And many cases where the BOM margins are not the cost driver, such as when installation/setup costs dominate. And projects where cost is secondary to things like time-to-market and the ability to integrate/customize, etc.


It's really hard for me to understand how running a VM on a resource-constrained device has any benefit. There is a reason why those devices run very lightweight "OS"s like FreeRTOS and Embedded C.

Why the constant obsession with applying a technology designed for a specific purpose everywhere else, even when it doesn't make sense?


Say that to the millions of ID cards, transport cards, SIM cards, and other smart cards with a secure element that run Java (a lot of times only powered by the small current from the NFC tap).


Java was originally intended for embedded devices...


It's not a bad decision for scripting provided the VM is lightweight enough. Things like "FORTH interpreter" or the old "BASIC stamp" microcontrollers. And it provides a degree of robustness vs running arbitrary binaries.


The Apollo program went to the moon with a complex VM on top of an extremely limited physical architecture. That's actually one of the main reasons to do it, because you can implement a strictly better (in all sorts of ways) virtual machine on top of really awful hardware.

Not to say that's valid in this instance, but plenty of early VMs were made entirely to improve resource-constrained hardware.


Making it easy for hobbyists who already know that technology to have access. Micropython has been successful, and this is an alternative to that.


The github project indicates "DeviceScript brings a professional TypeScript developer experience to low-resource microcontroller-based devices."

If you tell me it's a toy and somebody's pet project: fine. It's all about having fun.

But then don't mention "professional" in the project description.


It's not an experience for professional typescript development, but the experience that professional typescript developers are used to.

This isn't in competition with C or C++, it's in competition with micropython. Python isn't a great language except in its ubiquity. I'd rather work in what I know, which is TS. This opens up microcontroller development to JavaScript devs of whom there are a lot of us.

Types are really helpful when dealing with unfamiliar APIs. When doing embedded projects, you deal with a lot of APIs. There are the language built ins, any libraries you are using to interface with peripherals. This project opens up microcontroller development in a big way to a LOT of developers.

Is it what you want to use for a commercial embedded device? I can't say. Is that the only standard? Then you should just be using C for everything, I guess. But something like a Raspberry Pi Pico or ESP32 has plenty of resources to run JavaScript while still being able to manage a weather station or automated garden or security camera or pen plotter. There are lots of applications that don't use the full power of the board.


Hmm?

Instead of ctrl+f for "professional" I suggest re-reading it

>professional TypeScript developer experience

It is about the experience of a sane programming environment, right?


Easy: because TypeScript or Python are way easier to learn than C. Learning C is a long, arduous, uphill battle against arcane error messages and undefined behaviour.

Unless you have a background in C/C++ already, most people can probably get up and running with something like this way, way faster.


Good luck understanding things like `if(!!!c) { ... }` or why a line break after a return statement matters in JavaScript/TypeScript ;) JS has its own footguns and legacy baggage.


I've never seen `!!!` in JavaScript, and I do a lot of it. Care to share?


I shouldn't have made the example an if-statement, as it is mostly useless there. But triple ! is very common to negate-and-convert a possibly falsy value (undefined, null, false/true):

  const x: boolean | undefined | null = getValue();
  const not_x: boolean = !!!x;

I added TS type annotations for clarity, although they could be inferred if `getValue` is typed accordingly.


I've seen `!!` and I've seen `!`, but what would `!!!` get you here that the other two don't?


Negation plus a cast to boolean. See this for more info: https://stackoverflow.com/questions/21154510/the-use-of-the-...
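
A quick demonstration - and note that since `!` already returns a boolean, `!!!x` is always identical to plain `!x`:

  const v: unknown = undefined;
  console.log(!v);   // true  - negates, coercing to boolean
  console.log(!!v);  // false - the truthiness of v, as a boolean
  console.log(!!!v); // true  - same result as !v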


And line breaks after return statements? Is that true? I haven't stumbled on that one. So this is probably misinformation.


No, that one is true. JS automatic semicolon insertion is dumb.

   return
     <p>
       A JSX paragraph
     </p>
is a common mistake from novices; it'll return void/undefined.

In TypeScript, at least, you get yelled at for this.


How do you get that TypeScript or Python environment on the chip of your interest at the first place? How do you expose hardware interfaces without knowledge of C?


> How do you get that TypeScript or Python environment on the chip of your interest at the first place?

By having somebody else do it. Abstraction is a wonderful thing.


That's just kicking the can down the road. What if you are working on a device which is under NDA? What if it is some exotic MCU which nobody else uses?


Then you probably shouldn’t use this. It’s not for you, that’s cool, move on and use whatever you’re currently using.


I am just showing you that DeviceScript/MicroPython/Lua/any other scripting language will expect the user to know a lot of C in order to be able to use their board, unless they just want to run it without any input/output of data. But users want to use the scripting language because they don't know C. The whole flow is a Catch-22.


I might have agreed 10 to 15 years ago when arduino was brand new and almost everything was custom.

These days... eh - pretty hard disagree with everything you've said.

Do some folks still need to know the ins & outs of the device? Sure. Will this work on every device? Nope.

Does that matter for the success of this project? Not a fucking bit.

Honestly - this looks a lot like Electron in my opinion: It gives companies a very cheap entry point into a whole realm of tooling that was previously out of bounds.

They can do it without having to hire new folks, they can prototype and run with it as far as they'd like, and then 3 years in, once the product is real and they know they have a market - they can turn around and pay someone to optimize the embedded devices for cost/power/performance/other.

The flow isn't catch-22 AT ALL. The flow is: I'm trying to do a thing that's only marginally related to the embedded device, and it's nifty that I can do that with low entry costs (both financial and knowledge).

---

By the time you are under NDA for a new device... you are established enough to be making your own decisions around tooling (basically - you are part of phase 2: optimize).


> The flow is: I'm trying to do a thing that's only marginally related to the embedded device

It's too bad that this comment is buried so deep: it should be at top level. More and more often, embedded work is just like this -- the business logic is far more important than the fact that it's running on an "embedded device." And in those cases, having programmers who understand modern software development at a high level is far more useful than having programmers who are expert in C and comfortable sitting down with multiple chip datasheets for a week, writing peripheral drivers.


One of the main reasons was that they had to: the cost of a more capable system was too high. In recent years that has improved drastically, and there are many use cases where the 5 USD increase in BOM needed to run JS/Python etc. can be justified.


Exactly! But it's more like 1.50 USD (ESP32-C3 or RP2040 compared to say STM8).


8- and 16-bit home computing says hello.


I agree; this will mostly go nowhere. Sure, when somebody prepares a DeviceScript environment for *your* board, then you are good to go. But in 99% of cases, you will get hardware in front of you which almost certainly is not supported by DeviceScript. And now, without intimate knowledge of C, how are you going to expose the interfaces of that particular hardware so you can work with them in DeviceScript? Well, you won't; you need to know C first.

Same problem for MicroPython. Same problem for Lua. Same problem for any scripting language running on a constrained MCU.


The target audience for such runtimes is teams with general software engineering skills, less embedded skill, and little hardware skill. They are likely to weigh software support (including drivers) very heavily when selecting hardware. This reduces how often the scenario you describe will come up, compared to traditional hardware development.


Yep.

For example, it supports ESP32.

Every problem sure starts to look like an ESP32 nail if I have this toolchain available.


Depending on the kind of work you do, this may not be a problem.

For my day job, I use STM32/C++ because it's what the company has standardized on. For my side gig/consulting work, I've pretty much standardized on ESP32 because it's cheap, has lots of resources and good community, and I can leverage the Arduino ecosystem. It's grossly overkill for a lot of projects but no-one cares. Clients just care that you can ship fast.

My next step is moving the side gig work to MicroPython or some other higher-level language that lets me code much faster than C/C++.


Agree 100%

That was my point - the ESP32 is so versatile and cheap, it's my go-to these days for pretty much everything.

Being able to have an easy and reasonably powerful JS runtime for that sounds great.

Apparently node-red also has something for esp32?

And I haven't tried it, but low.js looks cool too [1]

[1] https://github.com/neonious/lowjs


A VM can make all the gaping security holes portable between IOT devices.


I appreciate the tongue-in-cheek, but I think there’s really a chance for better IoT security when using a VM. Those things are connected to the internet (duh) and sandboxing is probably a good idea. You obviously don’t need a VM for that, but maybe the tradeoffs are favorable.


99% of the security issues in IoT things are software design stupidity. Using a "safe" language or "sandboxed" VM cannot save your lightbulb when its main loop includes "fill buffer with content from HTTP endpoint and execute it".


This looks really promising and that's coming from someone who used to hand roll assembly for my hardware projects in uni. Since my day to day focus is elsewhere these days I need a setup that will hold my hand and hide away as many footguns as possible. I'm already familiar with Typescript so using the same language in a different environment is a win. Having that system be backed by a large company is a great boon as well, as it means there's a somewhat smaller chance of it disappearing from underneath me (Google's projects notwithstanding).


I’ve been mixed about using alternative languages or specialized runtimes on IoT devices, and while I think an inherently event-driven language like JavaScript - with its first-class support for functions as values and its widely used patterns - is a natural fit, I’ve been of the mind lately that building firmware with these tools may be the wrong approach.

First, most IoT device behavior can be described with a finite number of actions, and usually revolves around reading and writing bits to a bus (I2C/TWI, SPI, USART, CAN) or GPIO. Hardware is only really configured once and runs forever.

I think there is a place for a hardware system that self-describes to an entity and receives a bytecode procedure for each phase of its operation. This bytecode can be made small, because there are not many operations to be done, and the firmware would just directly execute it and handle updates, versions, and device abstractions.


I don't think there's anything inherently event-based in JavaScript the language; it's just how the browser API works. You can just as easily write busy loops.


Sure, but that’s not the point here. With that logic, nothing is truly event-based if you can conceivably use it in other ways. JavaScript was designed to interface with the DOM and had to support certain flows, which is why I classify it as inherently event-based even if that does not meet the technical requirements to call it so.


Just out of curiosity, does anyone know of such an implementation which fully follows the standard?


Sounds a bit similar to AssemblyScript. I wonder if there's a connection between those projects. AssemblyScript targeting WASM on small devices might be useful.


We need to add a doc page about relationship with WASM...

In short, WASM was not designed to be interpreted, and definitely not on small devices. The DeviceScript VM bytecode is designed to run directly from flash (read-only memory, of which you typically have 10x more on an embedded system), with minimal overhead.

Also, WASM is not designed as a runtime for a dynamic language; e.g., its + operator works on two i32s, while what you really want for JavaScript semantics is a + operator that works on strings, NaN-boxed numbers, etc.

As for AssemblyScript, I guess it's meant as a language for small fragments of your code, not the whole application. For an application you would probably be better off with Rust or similar and native compilation.



I was thinking the same thing, just like TinyGo can target both microcontrollers and WASM.


There's a lot more information on the marketing site:

* TypeScript for IoT: The familiar syntax and tooling, all at your fingertips.

* Small Runtime: Bytecode interpreter for low power/flash/memory.

* Hardware as Services: Client/server architecture for sensors and actuators.

* Debugging: In Visual Studio Code, for embedded hardware or simulated devices.

* Simulation and Testing: Develop and test your firmware using hardware/mock sensors. CI friendly.

* Development Gateway: Prototype cloud service with device management, firmware deployment and message queues.

https://microsoft.github.io/devicescript/


Microsoft is really doing amazing work with FOSS lately. The only thing I see missing is some kind of filesystem. It doesn't seem quite ready to replace CircuitPython, but it probably will be in time.

The big potential I see here is app-capable devices. That's really the missing factor with IoT right now: the apps are separate from the device. We have to adapt everything around a device to be able to talk to it, and usually you can't because it's all proprietary.

But if we had an OS and an App store, any device would work with anything.

We could actually get some good use out of Matter being IP based, if we could run apps on our smart plug.

It would be especially great for things that have a display.

I'm not sure why nobody has made an IoT OS with an app store yet, but it would/will be awesome.


> The main difference in semantics between DeviceScript and JavaScript is that DeviceScript programs run in multiple fibers (threads). In practice, this behaves like JS with async/await but without an ability to manipulate promises directly (that is fibers can only interleave at await points and at most one fiber runs at any given time).
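
In plain TypeScript terms - an illustration of the scheduling model, not DeviceScript's actual API - that means something like:

  const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

  let led = false;
  async function blinker() {
    for (let i = 0; i < 4; i++) {
      led = !led;          // synchronous code is never preempted...
      console.log("led:", led);
      await sleep(500);    // ...fibers can only switch at await points
    }
  }

  async function sensorLoop() {
    for (let i = 0; i < 2; i++) {
      console.log("sensor tick");
      await sleep(1000);
    }
  }

  blinker();
  sensorLoop(); // the two "fibers" interleave, but never run at the same time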


The language seems rather complete [L], which makes me wonder: would that byte interpreter be of any use in other environments? How fast is it compared to e.g. V8 and JavaScriptCore? And to µPython?

[L] https://microsoft.github.io/devicescript/language


It should be about the same as uPython. There are a few more tricks we can play due to whole-program compilation, but I don't think we take advantage of that yet.



No ESP32-S3?

This looks great but needs a low-friction way to bolt together with some C or assembly code where needed. Not sure about Jacdac, and wondering how hard it would be to write libraries / servers / whatever for sensors, DMA, ADC, etc.

Also note docs say "Serial, SPI, PWM are not supported yet at the DeviceScript level."


SPI is now supported (though only on ESP32 right now; PRs are welcome elsewhere!) - just fixed the docs. DMA is typically bundled with some other driver (like SPI).

We do support ADC - there is even a funky way of exposing these nicely as Jacdac servers so they show up on debugging dashboards [0]

As for S3, PRs are welcome! :)

[0] https://microsoft.github.io/devicescript/developer/servers/a...


It’s odd that the S3 isn’t supported when the S2 is the same LX7-based core; as far as ESP-IDF bring-up is concerned, it’s pretty much just some io_mux_reg and other definitions (at least that was the case when I ported Nesper from the original ESP32 project to the S3).

Though it also seems like it doesn’t use both of the LX7 cores:

> and at most one fiber runs at any given time


This might be suitable for dumb IoT devices that are connected to a power adapter, but as soon as you begin working with battery-powered products, time-critical systems, or a need to reduce BoM costs, this sort of thing goes out the window.


Finally, I've been waiting for this for years now!

Now it's just a matter of continuing to expand the integrations. I'll have to look at what the process looks like for creating custom integrations for existing libraries when none exist.


Why the wait?

Microsoft's MakeCode project has already provided something similar, via compilation to C++, for quite some time now.


We're the same people who founded MakeCode. DeviceScript has a very different target audience (professional TypeScript developers vs students), is easier to port to different architectures, is more tied into VSCode, and is more compatible with JavaScript semantics (though slower).

Also MakeCode compiles to ARM machine code in the browser, not C++, which is one of the things that make it hard to port.


I remember there was an MSR talk about compiling via C++, hence the reference.


Do they really need to "Embrace, extend, and extinguish" micropython?


This is not the case here, because:

- It's a different language/project, therefore no embracing or extending is happening.

- It's just 2 hackers working at Microsoft; we don't have to dismiss their work because of their employer's actions years ago.

- It's open source.


Will this go the way of .NET micro framework and be abandoned eventually?


So kind of Espruino, but ES6 and TypeScript? That would be very cool.


Why not just use Lua?


Because of compile-time type safety / static analysis. And I say this as the author of an IDE that bolts those features onto Lua: https://github.com/Benjamin-Dobell/IntelliJ-Luanalysis


Because: weird language?


Love Lua, but damn I hate the 1-based indexing so much. It is such a pain to adapt to, and a source of bugs when implementing algorithms…


I know nothing about Lua, but I could never touch a 1-based language.

Lua - an entire language that’s an off-by-one error.


The thing is that Lua has been here for quite some time, and it will still be here when Microsoft ends support for this experiment and it slowly gets deprecated and forgotten.


Static types


Microsoft has a long history of fracturing programming language markets. It's not so much that this is a bad idea; more that it divides an already healthy market. There are MicroPython, Arduino, and Node-RED, not to mention all of the C/C++-based systems out there: mBed, Zephyr, and RIOT OS. Another addition is not helpful.


There are tons more out there; Microsoft hardly makes a difference by being one more.



