Hacker News | vodou's comments

Poor OpenRISC... Not even mentioned when they talk about RISC.


This is so brilliant! Reminds me of William Burroughs' five rules for a revolution in "The Revised Boy Scout Manual":

1. Proclaim a new era and set up a new calendar.

2. Replace alien language.

3. Destroy or neutralise alien gods.

4. Destroy alien machinery of government and Control.

5. Take land and wealth from individual aliens. Time to forget a dead empire and build a living republic.

And destroying the calendar was even the first rule. He was a genius!


This is not about SDR, it is about "software defined satellite". It is a generalization of the SDR concept, involving the whole satellite. Basically, it is about designing a more configurable satellite, suitable for a wider variety of tasks.


> This is not about SDR

Can the satellite in question grow software defined optics and look at things?

Can it grow software defined manipulators and fix another satellite?

Can it grow a software defined solar sail?

If the answer to these is no then what makes it different from a spaceborne SDR? (And for the record ain’t nothing wrong with that.)


Seems the "software defined" in these things is mostly about reconfiguring antennas and radio beams. It's perhaps a bit more than a traditional SDR; after all, there's not much modern radio communication that doesn't use SDRs, even the Mars rovers and satellites have them.


According to the press release, it is just a spaceborne SDR together with a very configurable antenna array, which can be used to provide various kinds of communication links.


The main feature is that you can "beam" a very narrow beam of RF to a moving target like a military convoy.


That's nothing terribly special for satellites. They've had phased array antennas for a while now.
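To make the phased-array point concrete, here is a toy sketch (not anything from an actual satellite payload, just the standard uniform-linear-array math) of how per-element phase shifts steer a beam: each element `i` gets a phase of 2π·i·d·sin(θ)/λ so the wavefronts add coherently toward angle θ off broadside. The function name and parameters are illustrative.

```python
import math

def steering_phases(n_elements, spacing_m, wavelength_m, angle_rad):
    """Per-element phase shifts (radians) for a uniform linear array
    so the element signals add coherently toward angle_rad measured
    from broadside. Standard result: phi_i = 2*pi*i*d*sin(theta)/lambda."""
    return [
        (2 * math.pi * i * spacing_m * math.sin(angle_rad) / wavelength_m)
        % (2 * math.pi)
        for i in range(n_elements)
    ]

# Broadside (angle 0): no phase gradient is needed across the array.
print(steering_phases(4, 0.05, 0.10, 0.0))  # → [0.0, 0.0, 0.0, 0.0]
```

Re-pointing the beam is then just loading a new phase table into the array, which is exactly the kind of thing "software defined" reconfiguration buys you.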


Fair point.

But I think the space industry is trying to communicate something other than SDR when they talk about software-defined satellites. E.g., satellites as a service: allowing different customers to time-share a satellite and use it for different purposes by widening its capabilities.


Industry 4.0 is basically slapping remote features onto expensive industrial machines.

Took me a while to realize that this is actually not normal and for them it's a revolution...


For now, the configurability of the satellite appears to refer to a configurable antenna array, which allows configuring a variable number of beams, in various directions and with various directivity properties, presumably alongside an SDR.


I think it works: https://www.google.com/search?q=site:gu.com

That gives me 1,150,000 hits. A search for "gu.com" on Hacker News gives just one page of results, though.


I don't know.

Yes, Google indexed many gu.com site hits, so I believe there must have been a link to those pages somewhere, sometime, that the Google crawler found. But searching for those short gu.com URLs now shows no external page hits on Google. (Try searching for "gu.com/p/3menc" or "gu.com/p/cqekc", for instance.)


RTEMS is fairly popular in the space industry. I have been involved in two satellite projects using RTEMS, one using the somewhat obscure OpenRISC architecture and one using LEON3/SPARC (also quite popular in the space industry).


I was an intern at JPL 10 years ago. We used RTEMS + SPARC on the mission I worked on (GRACE-FO, an Earth science satellite). Unfortunately, I have no memory of how much it was used in other missions.


Me too - ESA is a pretty heavy user of RTEMS.


While smaller ESA projects like satellite instruments are usually free to choose which version of RTEMS they use, larger and more important projects use ESA's own version. This version is known as the "EDISOFT" version because it was re-developed/qualified by a Portuguese company of that name. It is basically a rewrite of a subset of the RTEMS API. You can read more about it here: https://www.esa.int/Enabling_Support/Space_Engineering_Techn...

Like most of ESA's software engineering tools, it is hopelessly outdated and relies on a patched GCC 4.2.1 which is missing a lot of useful features and bug fixes, especially for the SPARC architecture which is often used in ESA spacecraft. See -mfix-* switches here https://gcc.gnu.org/onlinedocs/gcc-10.4.0/gcc/SPARC-Options....

While RTEMS is GPL-licensed, ESA does not want the code to be freely available -- you probably won't find it online. Some years ago they realized that, if they don't upstream their code, they have no chance of keeping up to date, and started an initiative to improve the situation: https://rtems-qual.io.esa.int/


This is ESA's fourth RTEMS qualification effort AFAIK. This time they are working with the community and have submitted things like requirements for a subset of RTEMS and requirements based tests. This is tracking the git master and SMP is included. We worked with NASA IV&V folks to get an outline for the RTEMS Software Engineering Guide which provides information expected in a qualification review.

Quietly, an RTEMS core developer submitted GCC support for running gcov or gprof on embedded targets and getting the data off the target. This will be used to get coverage reports which meet ESA expectations. We do have very high code coverage.

ESA has also sponsored formal analysis of some parts of RTEMS.

I am quite happy ESA is working so well with our core developers and community this time.


I am trying to understand how the EDISOFT version differs from upstream RTEMS. The only concrete differences I can read out from the linked webpage are basically:

* SpaceWire support (LEON3)

* MIL-STD-1553 support (LEON3)

* kernel and application monitoring services (whatever that means)

Do you know any other differences?


Basically they made a list of a minimal set of functions that they want to support. They deleted all the other functions, then re-implemented the remaining code. For example, they axed the entire POSIX API and only kept the classic "rtems_" API. I don't know about the quality of the SpW and M1553 code in the EDISOFT version, but the quality of the driver code that Gaisler (now CAES) distributes for their processors is abysmal. Found here: https://www.gaisler.com/anonftp/rcc/src/ -- Among other issues: spaghetti code, certain error flags not checked/reset (resulting in lockups in case of errors), buffer sizes not configurable, unbounded loops/excessive wait delays. All big no-nos for safety-critical embedded software, but understandable given limited manpower.


I believe RTEMS is BSD licensed.


Nope, it's not. At least not entirely: https://www.rtems.org/license -- and the ESA version is definitely GPL-licensed.

Nothing against you personally, but I would really appreciate it if, instead of stating a belief, you actually looked it up and pointed out a source for that bit of information.


The project is moving to a two-paragraph BSD license and is making great progress.

https://devel.rtems.org/ticket/3053


C is not inherently unsafe. Sure, it doesn't have memory safety as a feature. But there are loads of applications considered safe that are written in C. An experienced C programmer (with the help of tooling) can write safe C code. It is not impossible.


That would explain all the vulnerabilities in systemd and Linux. They just aren't experienced enough. Linus needs to get in touch with an expert.


I’m looking forward to your efforts in rewriting it in Rust


So is everyone else! Can't happen soon enough.


SQLite is the most stringently developed C code I'm aware of--the test suite maintains 100% branch coverage, it is routinely run through all of the sanitizers, and it is regularly fuzzed.

It still accumulates CVEs: https://www.sqlite.org/cves.html.


As I recall, one of the advantages of C over Rust for the SQLite authors is that they have the tooling to do 100% branch coverage testing of the compiled binaries in C. They tried Rust, but Rust inserts code branches they are unable to test.

The tradeoff, then, is the small number of denial-of-service bugs listed vs. not having 100% branch coverage. And they chose the former.

(The authors also believe Rust isn't portable enough and doesn't handle out-of-memory errors well enough - https://www.sqlite.org/whyc.html#why_isn_t_sqlite_coded_in_a... .)


Are you aware of a way to develop fault-free code? Then please share this knowledge.


It's easy to develop fault-free code: just redefine all those faults as (undocumented) features!

That's not a helpful answer, but it's basically the same thing you're doing--redefining memory safety vulnerabilities that would be precluded entirely by writing in memory-safe languages as programmer faults.


He's aware of a way to develop memory-corruption-fault free code, obviously.


I guess "experienced C programmers" must be in short supply, although they have been writing C for years.


The effort is massive and the experience to do so at scale is very rare.


Are Git submodules really that bad? Really? Maybe it's just me who hasn't yet discovered their true dark side. They're not perfect, but they work well for mid-sized projects at least.


They're better now than they were years ago, but yes they're still bad. And they're still second-class citizens for git commands.

Git has been adding more config settings to try to make it less painful, such as submodule.recurse, but that slows down most git commands significantly, and the truth is most repos don't actually need to recurse - they just need the top level. So instead people end up writing aliases to try to handle it, which works OK, but it's brittle and people have to know to do it.

And if you have your repo on GitHub, they don't do some simple things that could have made it easier. For example, they could let the repo owner make the git-clone copy-to-clipboard thing have --recurse-submodules in the command you copy, so that cloning will do the right thing. But they don't. I understand why git itself doesn't want to make such things automatic, but GitHub could do it for at least private enterprise account repos.

BTW, I am _not_ advocating whatever this "Git X-Modules" thing is that the link points to. I have no idea what that is, never used it, and don't plan to.


They're fine, so long as you do not have to fork them. Very often we have to work on multiple submodules in parallel; to make that work we use the same name for all feature branches, so that our CI and tools understand that these unrelated branches in unrelated repos actually belong together. I'd prefer not to have to do that, though. At least it taught me why so many big companies use monorepos.


They are implemented in a maximally safe way, which generally means they leave files/folders/etc. lying around when you least expect it, which goes on to cause later annoyances, because doing otherwise could mean deleting data you didn't know would be deleted.

Say you're git, and someone has a submodule that goes from having:

    sub/
      .gitignore = a/ignored.txt
      a/
        tracked.txt
        ignored.txt
to this, when they `git checkout` some other newer or older or totally-separate-history branch (possibly several layers above this submodule):

    sub/
      b/
        tracked.txt
what do you do with sub/a/ignored.txt? What if the whole submodule was removed?

Git's answer to every single question like this is to either fail to perform the submodule version change, or leave a now-untracked-and-not-ignored sub/a/ignored.txt file in the directory, which leaves you with an unclean checkout that it'll warn and complain and possibly conflict on.

It's highly unergonomic, but reliably avoids silently doing anything that might cause you to lose data. The awful ergonomics are what people generally hate about it. They'd rather it made some consistent decisions so it Just Worked™ in nearly all cases. But that's not Git-like.


That's because you haven't had all code messed up when someone pushed in the wrong order :-)


How can pushing in the wrong order mess up all code? The worst thing I can imagine is pushing the superproject with a subproject update, but not pushing the subproject. That would end up with an unbuildable commit in the superproject, as you can never check out its associated commit from the subproject, but I don't see how that's close to "messing up all code".



Notice the post is 10 years old, and some of the complaints can be solved with the config `submodule.recurse true`: https://stackoverflow.com/a/49427199


Good question, I was wondering the same. I have never really had an issue with submodules, and I am not sure what killer feature this adds over submodules.


They're much easier to consume now than they used to be but at the same time they're usually broken by default. Adding them to a project is not seamless or invisible. They do not "just work" with the same commands your team was previously using.

They're still pretty annoying to actively work in.


Maybe that is the problem - you should not use them for stuff that needs to be actively changed all the time.

We keep only DTOs in submodules, so you only update one when you change a property on some communication object.

This way you also don't want them to be invisible - you really want to see and have people notice this was updated and actively choose what to do with it.


They have a reputation for being bad, which is enough for a company like the one in the OP to use as a marketing hook.


The most annoying problem is CI: I cannot even use a private git submodule in GitHub Actions without creating a personal token, passing it as a secret, and using some git command.


Which seems more like a problem with GitHub tooling, and less with git submodules.


For unimplemented stuff / placeholders I highly recommend exceptions instead of TODOs in any form (if suitable and your language of choice supports that). E.g. in Python:

  def my_fancy_function():
      raise NotImplementedError


But why only one or the other? The TODO is for your IDE to tap into so you can quickly and easily find outstanding work before the code is ready to be submitted as MR/PR (most IDE and even many simpler code editors have TODO/FIXME listing built in).

The exception is for runtime behaviour until the TODO is resolved: it's something you do mostly for yourself, not others. By the time others see your code, your TODOs have either been addressed, or they're acceptance criteria for follow-up work in your project tracker.

And that brings us to the third part that we _also_ need: the issue that the work the TODO talks about is for. Either as a checkbox task in a larger piece of work, or if it's big enough, taking up the entire issue. (because good project management means knowing what tasks are required/which work is outstanding, without opening a single file)
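All three pieces can live in one placeholder, as in this small sketch (the issue key `PROJ-123` is a hypothetical placeholder for whatever your tracker uses): the TODO comment feeds the IDE's task list, the exception guards runtime, and the key links back to the tracked work.

```python
def export_report(data):
    # TODO(PROJ-123): implement CSV export -- tracked in the project backlog.
    # The TODO comment is for the IDE's task list; the exception below makes
    # sure calling this prematurely fails loudly at runtime instead of
    # silently doing nothing.
    raise NotImplementedError("CSV export not implemented yet (see PROJ-123)")
```

Grep for the issue key (or let the IDE do it) and you get the outstanding-work list without opening the tracker.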


>... CAN is one of the least secure BUS's imaginable.

Can you elaborate on this?


No security or authentication. IIRC, any attached device has full control over the physical layer by design. I don't really see why this is a problem in a car with isolated buses for safety-critical components.
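The "no authentication" point is visible in the frame format itself. This sketch packs the 16-byte Linux SocketCAN `struct can_frame` layout (from linux/can.h: a 32-bit arbitration ID, a length byte, padding, 8 data bytes) just to illustrate what a classic CAN frame carries; note what is absent: there is no sender identity, signature, or authentication field, so any node on the bus can transmit any ID.

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN 2.0A frame in the Linux SocketCAN layout:
    u32 can_id, u8 dlc, 3 pad bytes, 8 data bytes (16 bytes total).
    The frame identifies *what* is being sent, never *who* sent it."""
    if not 0 <= can_id < 0x800:
        raise ValueError("classic CAN uses 11-bit arbitration IDs")
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("=IB3x8s", can_id, len(data), data)
```

Authentication schemes for CAN (e.g. message authentication codes squeezed into the payload) exist, but they are bolted on top; the bus itself trusts every node by design.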


Yeah... that's like saying a car has a security risk because someone can go in the wheel well and slash the brake lines...


Yeah, that is a security risk

We choose to ignore it because it's rarely exploited, but by that logic, we should just ignore all zero-days.


Many newer vehicles contain systems, OEM and/or after-market, that are permanently connected to the internet via cellular modem. Other systems with insecure RF tech used for various gimmicks. Other systems that communicate with external and potentially malicious devices like chargers. Etc. Most of these systems have enough access to (in)directly destroy or booby trap the car. My car is able to receive ECU(!), firmware, software updates OTA from the manufacturer. These critical systems are just as "closed" or "isolated" as cloud-enabled "CCTV." Scary stuff.


A very simple tool for transferring files between two computers. Linux <-> Windows should just work, UPnP handling, etc. - just a crazy simple way to transfer a file without setting up Dropbox, FTP servers, or whatnot!


For files that are not private I use: https://btorrent.xyz/

People usually recommend https://file.pizza but it just doesn't work for me.

Now that I think about this it would be beautiful if there was something like stunnel but for NAT hole punching and what not. You would do:

  $ send-file FILE
  qk11c14pmq7maeod
  $
You would send the unique id (which could also be something that yields a QR code, or some random words) over a separate channel. Then you would run "receive-file qk11c14pmq7maeod" on the other end. It would be a simple binary that would be easy to embed and call through some pretty UI. Generation and scanning of those QR codes would probably have to be part of the package. The problem would be: who would finance the servers needed for NAT traversal?
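The send-file/receive-file idea can be sketched in a few dozen lines of plain sockets. This toy version (all function names are made up for illustration) only works with a directly reachable host; a real tool like croc or magic-wormhole adds a rendezvous server for NAT traversal and proper cryptography, where here the code word is just a crude shared secret checked in plaintext.

```python
import os
import socket
import threading

def send_file(path, host="127.0.0.1"):
    """Serve one file on an ephemeral TCP port; returns (code, port).
    The code is the out-of-band secret the receiver must present."""
    code = os.urandom(8).hex()  # 16-char id, shared over a separate channel
    srv = socket.socket()
    srv.bind((host, 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn, srv:
            buf = b""
            while len(buf) < len(code):  # read the full code word
                chunk = conn.recv(len(code) - len(buf))
                if not chunk:
                    return
                buf += chunk
            if buf.decode() == code:  # crude check, no real crypto here
                with open(path, "rb") as f:
                    conn.sendfile(f)

    threading.Thread(target=serve, daemon=True).start()
    return code, port

def receive_file(code, port, dest, host="127.0.0.1"):
    """Connect, present the code, and write the stream to dest."""
    with socket.socket() as s, open(dest, "wb") as f:
        s.connect((host, port))
        s.sendall(code.encode())
        while chunk := s.recv(65536):
            f.write(chunk)
```

Swapping the direct `connect` for a hole-punched connection brokered by a rendezvous server is exactly the part that costs money to run, hence the financing question.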


You're looking for croc or magic-wormhole.


It is strange that most of these tools seem to involve sending your data halfway around the world. Guess local doesn't pay?

I just use good old SMB/network sharing - it's not much harder than setting up some program on two devices. Works between Windows/Linux/Android, fast and good enough.



Set up SCP on your Windows box; everything else already "just works".


It's not a CLI tool, but https://snapdrop.net/ is local-network file transfer.


transfer.sh seems pretty close to what you’re looking for

