cnvogel's comments | Hacker News

Here’s what I don’t get: why the many layers of obfuscation in the build phase? (I understand why in the binary linked into ssh.)

Once the first stage, extracting a shell script from one of the "test" data blobs, had been found, it was clear to everybody that something fishy was going on.

It's inconceivable that I would have found the first stage and just given up, but then it was "only" a matter of tedious shell reversing…

They could easily have done without the "striping" or the "awk RC4", even though all of that must have complicated their internal testing and development quite a bit.


> It's inconceivable that I would have found the first stage and just given up

But what you were looking at might not be the first stage.

You might be looking at the modified Makefile. You might be looking at the object files generated during the build. You might be looking at the build logs. You might be investigating a linking failure. The reason for so many layers of obfuscation is that the attacker had no idea at which layer the good guys would start looking; at each point, they tried to hide in the noise of the corresponding build-system step.

In the end, this was caught not at the build steps, but at the runtime injection steps; in a bit of poetic justice, all this obfuscation work caused so much slowdown that the obfuscation itself made it more visible. As tvtropes would say, this was a "Revealing Cover-Up" (https://tvtropes.org/pmwiki/pmwiki.php/Main/RevealingCoverup) (warning: tvtropes can be addictive)


Reduces the surface area through which this could be found, I expect. Without all the obfuscation someone might spot suspicious data in the test data or at some other stage, but this basically forces them to find the single line of suspicious shell script and follow the trail to find the rest of the stuff added to the build process.


> Here’s what I don’t get: why the many layers of obfuscation in the build phase?

For a one-of-its-kind deployment it would probably not matter. However, deploying to multiple targets using the same basic approach would allow all of them to be found once one was discovered. With mildly confusing but different scripting for each target, systematic detection of the others becomes more difficult.


You are not observing the "problem" of entropy running low. Because it doesn't.

You are observing the misguided attempts at fixing this non-problem.


https://www.qualcomm.com/products/qualcomm-9205-lte-modem

The Qualcomm® 9205 LTE modem is our next-generation ...

Qualcomm 9205 uses the latest generation (gen9) Qualcomm® GNSS engine ...

Location

Satellite Systems Support: GPS, GLONASS, Beidou, Galileo ...


Yes, that's certainly a solution.

Or using a certificate authority for users (TrustedUserCAKeys in sshd_config), so that any user who has a signed certificate, and owns the corresponding private key, would be allowed to log in. No further updates of authorized_keys files needed.
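
Roughly like this (file names and the principal are made up for illustration):

    # one-time: create a CA key pair, then sign a user's public key with it
    ssh-keygen -f user_ca
    ssh-keygen -s user_ca -I alice -n alice -V +52w id_ed25519.pub

    # on every server, a single line in /etc/ssh/sshd_config:
    TrustedUserCAKeys /etc/ssh/user_ca.pub

sshd then accepts any certificate signed by that CA whose principal matches the login user, so rotating keys or adding users never touches the servers again.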

And, to further automate the ssh login, maybe your LUKS container could have a second (Nth) key-slot holding a random key that is RSA-encrypted with the other machine's identity public key? (https://bjornjohansen.no/encrypt-file-using-ssh-key for examples)
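
A rough sketch of what that could look like (file names are made up; the private key has to be readable by openssl, i.e. PEM format, and plain RSA can only wrap a payload of a few hundred bytes, which is plenty for a keyfile):

    # turn the SSH public key into PEM and wrap a fresh random keyfile with it
    ssh-keygen -e -m PKCS8 -f id_rsa.pub > id_rsa.pub.pem
    dd if=/dev/urandom of=luks.key bs=64 count=1
    openssl rsautl -encrypt -oaep -pubin -inkey id_rsa.pub.pem -in luks.key -out luks.key.enc

    # add it as an extra key-slot; later the peer decrypts luks.key.enc and opens the volume
    cryptsetup luksAddKey /dev/sdX luks.key
    openssl rsautl -decrypt -oaep -inkey id_rsa -in luks.key.enc -out luks.key
    cryptsetup luksOpen --key-file luks.key /dev/sdX secretdata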

But generally, I really dislike the use of LUKS in this case, as I think filesystem-based encryption (not encrypting whole block devices) would make more sense. I understand that this isn't as mature as LUKS, though.


> Also I’m curious about the 1-bit recording. Is that a thing?

Yes. This homemade GPS receiver uses a simple comparator (1 bit) to sample the signal (after amplification to get the noise floor over the decision threshold).

http://www.aholme.co.uk/GPS/Main.htm
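
The intuition is that the correlation gain comes from the spreading-code length and integration time, not from the sample depth, so hard-limiting to one bit costs only a couple of dB of sensitivity. A toy numpy sketch (my own, not taken from the linked project):

    import numpy as np

    rng = np.random.default_rng(0)
    code = rng.choice([-1.0, 1.0], size=1023)      # stand-in for a C/A-style spreading code
    rx = 0.3 * code + rng.normal(size=1023)        # signal sits ~10 dB below the noise floor
    one_bit = np.sign(rx)                          # the "comparator": keep only the sign

    print(np.dot(one_bit, code))                   # clear correlation peak, a few hundred
    print(np.dot(np.sign(rng.normal(size=1023)), code))  # noise-only reference, near zero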


Here's a paper that shows reflections (Fig. 3, "S11") and insertion loss (Fig. 4, "S12") on coplanar waveguides, when using a sharp 90° bend, a 90° bend with a champfer and the case where the bend is replaced by two successive 45° turns ("final design").

The frequency scale spans 10..90 GHz.

http://tentzeris.ece.gatech.edu/ECTC09_Rida.pdf


I'm a chip designer and this article kinda cracked me up. For microwave signals on submicron lines, losses get very significant. It seems that PCB design inherits our practices in places where these issues aren't so critical.

But then I wonder about applications. If you ignore this rule of thumb, and gang a few hundred boards together sharing a clock... will your signal survive?

Sometimes a 'superstition' is just common sense regarding edge cases: walking under a ladder won't have cosmic effects, but a dropped bucket of paint can leave a mark.


> If you ignore this rule of thumb, and gang a few hundred boards together sharing a clock... will your signal survive?

No. But then if you need a few hundred boards off a single clock I assume your phased array radar budget can handle the extra engineering.


Yeah, anyone who has done RF engineering knows that "just a little bit of capacitance" can be a lot of capacitance.


If you are in RF, you may well already be using "cornerless" smooth traces. Google "topological router".


That or using discontinuities in impedance intentionally. I've designed and had built a handful of successful UHF planar PCB filters using Sonnet. In a lot of them I use very small changes in trace width as the place to put resonating elements, e.g. http://superkuh.com/stepped-impedance-bandstop-filter.html

The difference between square corners and compensated is real. And it gets more real if you're working in generic FR4 with 1.6mm thickness and 2-3mm wide traces for the sweet spot between 50 and 75 ohms.
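
As a rough sanity check on those numbers, an IPC-2141-style microstrip approximation (only a ballpark formula; εr ≈ 4.3 for FR4 and 35 µm copper are my assumptions) puts 2-3 mm traces over 1.6 mm FR4 right in that region:

    from math import log, sqrt

    def microstrip_z0(w_mm, h_mm=1.6, t_mm=0.035, er=4.3):
        """IPC-2141 approximation for surface microstrip impedance."""
        return 87.0 / sqrt(er + 1.41) * log(5.98 * h_mm / (0.8 * w_mm + t_mm))

    for w in (2.0, 2.5, 3.0):
        print(w, round(microstrip_z0(w), 1))   # roughly 64, 56 and 50 ohms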


You can play around with this type of simulation quite easily. I have been using the free version of the Sonnet planar EM solver [1] for this over the years. Helpful in developing the industry-standard magic RF intuition.

[1] http://www.sonnetsoftware.com/products/lite/


The "final design" is the chamfered bend with additional vias, a design with two 45° bends was not studied in this paper. Page 2, second paragraph of section A :

> The final design that includes both the chamfered bend and the vias


It's still non-esoteric today:

http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc....

The ARM Cortex-M4 supports addressing individual bits of memory locations (often registers of hardware peripherals) through additional alias memory locations: writing to, or reading from, one of those alias locations writes or reads a single bit.

e.g., from the examples of said infocenter url:

   *(uint32_t*)0x20000000 |= (1<<7);
   *(uint32_t*)0x20000000 &= ~(1<<7);
should be equivalent to

   *(uint32_t*)0x2200001C = 1;
   *(uint32_t*)0x2200001C = 0;
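
The mapping behind those magic addresses is just arithmetic; a small sketch of my own (not copied from the ARM docs) for the SRAM bit-band region:

    #include <stdint.h>

    /* Each bit of the 1 MB region at 0x20000000 gets its own word-wide
       alias address in the bit-band region starting at 0x22000000. */
    static uint32_t bitband_alias(uint32_t byte_addr, uint32_t bit)
    {
        return 0x22000000u + ((byte_addr - 0x20000000u) * 32u) + (bit * 4u);
    }

    /* bitband_alias(0x20000000u, 7) == 0x2200001C, matching the example above */
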
Also, Analog Devices SHARC DSPs (which, I think, are still being sold with this architecture, and still used, even though I last used them 10 years ago) alias their memory four times, depending on whether you want to access it as 16-bit, 32-bit, 48-bit or 64-bit data.


I think you're missing 'volatile's in your casts


The examples were only meant to show the bit/address mapping, not to be a complete example of how to access memory-mapped peripherals.


why would you need that in this case?


Probably so the compiler doesn't just optimize out the statements, seeing as they don't seem to have a visible effect on the program execution because they're never read again.


Because writing to one location causes the data to change in another (a system register), you don't want the compiler to assume that data in an I/O register can be cached in a CPU register.
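
For illustration (my own minimal sketch, reusing the bit-band address from above): with volatile the compiler has to emit every access, so even back-to-back writes that are never read back survive optimization.

    #include <stdint.h>

    #define BIT7_ALIAS (*(volatile uint32_t *)0x2200001Cu)

    void pulse(void)
    {
        BIT7_ALIAS = 1;   /* both stores must be emitted ...      */
        BIT7_ALIAS = 0;   /* ... neither may be merged or dropped */
    }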


Quote from the Article: """The figure is calculated by multiplying 2 by itself 77,232,917 times and then subtracting 1."""

Would HN agree that multiplying 2 by itself once is 2⨯2 = pow(2,2), twice is 2⨯2⨯2 = pow(2,3) and N times yields pow(2,N+1)?

The Mersenne Prime found is pow(2,77232917) − 1, hence the article got the number wrong?
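
In Python terms, just to spell the off-by-one out:

    assert 2 * 2 == 2**2            # "multiplied by itself" once
    assert 2 * 2 * 2 == 2**3        # twice
    # so "2 multiplied by itself 77,232,917 times" would be 2**77232918 - 1;
    # the prime actually found is
    m = 2**77232917 - 1             # about 23.2 million decimal digits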


If you're going that way, you can also interpret it - pardon my programming - as the following:

  int result; 
  for (int i = 0; i < 77232917; i++) { 
      result = 2 * 2; 
  } 
  return result - 1; 
... which obviously results in 3. Also a (Mersenne) prime, but not quite as big as you'd expect, and certainly not millions of digits long.


  int result; 
  for (int i = 0; i < 77232917; i++) { 
      result = 2 * 2; <----- you basically said result=4 77232917 times 
  } 
  return result - 1; <--- and then 4-1=3
so of course it will always be 3


That is the joke.


Yeah, I suppose if one wants to be pedantic. But we know what it means.


Dang ol' off-by-one errors.


The ban is specifically for the ability of the watches to covertly record or transmit audio, not about GPS logging.


https://www.python.org/dev/peps/pep-0238/

(Describing the old Python 2 behavior:)

-------- Quote: -------

The classic division operator makes it hard to write numerical expressions that are supposed to give correct results from arbitrary numerical inputs. For all other operators, one can write down a formula such as x*y**2 + z, and the calculated result will be close to the mathematical result (within the limits of numerical accuracy, of course) for any numerical input type (int, long, float, or complex). But division poses a problem: if the expressions for both arguments happen to have an integral type, it implements floor division rather than true division.

-----------------------

To guarantee the correct mathematical behavior in Python 2, one would probably have to write:

    def true_math_div(a,b) :
        if type(a) is int or type(a) is long :
            a = float(a)
        if type(b) is int or type(b) is long :
            b = float(b)
        return a/b
as a and b could be int, float, complex, or some object defining the method __div__.
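
For completeness, PEP 238 also gave Python 2 itself an opt-in to the new semantics, which avoids hand-rolled helpers like the above:

    from __future__ import division   # per-module switch defined by PEP 238

    print 1 / 2     # 0.5: true division, regardless of operand types
    print 1 // 2    # 0:   floor division stays available via //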


What's wrong with just `float(a)/float(b)`?


not everything that can be divided can be reasonably cast to a float


Also, it's ugly as sin

