
I now use AirPlay to extend a MacBook screen to my TV and play the stream that way. But it's so needlessly complicated compared to before :/

What a strangely beautiful sight. While I was excited to see the ship land, I'm also happy I get to see videos of this!

Yes, both spectacular and beautiful. I guess Starship can now say what the legendary comedy actress (and sex symbol) of early cinema Mae West said:

"When I'm good... I'm very good. But when I'm bad... I'm even better." :-)

Combined with another tower catch, that's two spectacular shows for the price of one. Hopefully the onboard diagnostic telemetry immediately prior to the RUD is enough to identify the root cause so it can be corrected.


I felt... bad watching that breakup; it reminded me of Columbia.

Which coincidentally launched 22 years ago today: https://en.wikipedia.org/wiki/STS-107

Seeing it streak across the sky just reminded me of how important this is and how critically SpaceX must get it right before sending anyone up. I will never forget the trepidation of watching Bob and Doug go up, and being in tears when it was clear they had made it.

OTOH I remembered Columbia too and I felt good knowing that Starship is being tested thoroughly without jeopardizing the crew.

The Space Shuttle could not fly to orbit automatically. It had to have people on board, and the first flight, IIRC, came close to disaster.


I remember being woken up by the thunder from Columbia.

Lost it over the years but I used to have a photo of about 20 vans of people parked on our property doing the search for debris. Don't think they found any on our land but there was a 3 ft chunk about 5 miles down the road.


I remember waiting for the sonic boom that never came…

I don’t know why you’re getting downvoted, but I thought this too.

It's a weird déjà vu feeling of uneasiness. I can't explain it, but I think it's also that Columbia really hit me as my generation's Challenger.

Meta-commentary is annoying (yes, I realize the irony.)

As long as the debris has no effect wherever it lands, I agree with you

A lot of flights seem to be diverting to avoid it...

https://bsky.app/profile/flightradar24.com/post/3lfvhpgmqqc2...


Does SpaceX bother with NOTAM for its launches?

https://en.wikipedia.org/wiki/NOTAM

It seems like the flights should have been planned around it so no diversion would be needed.


They do, but it's not clear to me whether the area where it broke up was actually included in the original NOTAM. The NOTMAR definitely did not include it, according to the graphic shown on the NASASpaceflight stream. They are still live so I can't link a time code, but it's something like -4:56 in this stream as of posting: https://www.youtube.com/watch?v=3nM3vGdanpw

Since I couldn't find the time code in the video, I put a map together with both the NOTAM and NOTMAR.

map: https://github.com/kla-s/Space/blob/main/Map_NOTMAR_NOTAM_Sp... description: https://github.com/kla-s/Space/tree/main

Let's hope this is the year of the Linux desktop and that I didn't violate any licenses or make any big errors ;)


My understanding is that there are areas which are noted as being possible debris zones across the flight path, but that aircraft are not specifically told to avoid those areas unless there is an actual event to which to respond.

If my understanding is correct, it seems sensible at least in a hand-wavy way: you have a few places where things are more likely to come down, either unplanned or planned (immediately around the launch site and at the planned deorbit area), but then you have a wide swath of the world where, in a relatively localized area, you -might- have something come down, with some warning that it will (just because of the time it takes to get from altitude down to where aircraft are). By closing the priority areas, but not closing the less likely areas proactively, only reactively, it seems you'd achieve a balance between aircraft safety and air service disruptions.


Actually, this video is a good indication of exactly what transpired:

https://www.youtube.com/watch?v=w6hIXB62bUE

It's ATC audio captured during the event.


This video, the map elsewhere in this subthread, and the stream recording give a nicely detailed view into what went down. It seems like everything went as it was supposed to in terms of pre-warning, but based on the video the information didn't make it to pilots with coinciding flight plans until after the fact.

As far as I understand, airline pilots have a high level of authority, and diverting probably was the right call depending on the lag between seeing it and knowing what it was, or whether there was a risk of debris reaching them. They wouldn't necessarily know how high it got or what that means for debris.


Yeah... and ATC for a good while didn't have any estimate for time to resolution. So, do you run the airplane's fuel down to a minimal reserve level in hopes that the restrictions might lift... or just call it done and divert?

I think it's an absolutely reasonable choice to just divert comfortably rather than try to linger in hopes of it not lasting too long and possibly end up diverting anyway... but on minimums.


Understandable, but an overreaction. Any debris that doesn't burn up falls down within minutes.

Would you bet hundreds of lives and millions of dollars on that?

Yes. Space debris near orbital speeds doesn't fall straight down; it's simple physics.

If anything planes much further downrange (thousands of km) should be diverted, not immediately under the re-entry point.


The planes diverting were downrange. Also, I doubt they had much information to go on, and were essentially flying blind about where the debris was unless they had a direct line to NORAD.

Do you have a better explanation for why they were doing donuts over the Pacific at the time of reentry, and then were diverted?

https://www.flightaware.com/live/flight/ABX3133

https://www.flightaware.com/live/flight/N121BZ/history/20250...

https://www.flightaware.com/live/flight/NKS172


I was on r/flightradar24 and someone was listening to ATC and heard that one of the flights declared an emergency due to fuel.

Other planes were also caught up in the chaos, but SJU was apparently at capacity.


The ATC audio is up on YouTube - I heard it on the VATSIM channel. ATC would not let pilots transit the designated danger airspace without declaring an emergency. So they did.

I don't have one. Maybe they were indeed diverted because people got scared? Still seems pointless given the distances involved. Most reports are coming from social media / people watching Flightradar24, and news media is just picking those up.

There are several, all at the same time, all in the same area where the debris was seen.

Why do you think it is pointless?

If I am a pilot and the tower says "debris seen heading east of Bahamas", I probably won't want to keep flying in that direction.

Yeah, it is probably low risk, but I don't have a supercomputer or a detailed map of the Starship debris field or entry zone.


> donuts over the pacific

Atlantic


Doh!

Nuts!

It wasn’t at orbital speeds yet.

Over 21,000 km/h when it broke up, compared to ~28,000 km/h for stuff orbiting in LEO. Should still go quite far.

Yes, although drag is gonna be… substantially higher like this as well.

Does melting down not reshape metallic particles into ideal droplet shapes?


More so, as long as there were no humans onboard.

Looks like something out of a sci-fi movie.

The number of SpaceX video clips that I know are "actual things really happening" which still activate the involuntary "Sci-Fi / CGI effect" neurons in my brain is remarkable.

Yeah. I know that feeling.

That tower catch. That _had_ to be a new version of Kerbal, right? The physics looked good, but there's no way that was real...


Indeed. The one that still flips a bit in my brain is the two Falcon rockets landing in unison side by side. I'd say it was high-end CGI except no director would approve an effects shot of orbital rockets landing in such a perfect, cinematically choreographed way.

It would just be sent back to ILM marked "Good effort, but too obviously fake. Rework to be more realistic and resubmit."


Just to link that: https://www.youtube.com/watch?v=wbSwFU6tY1c&t=1793s

Such an unbelievable moment. And I also think an indicator of how much better society could be if we focused more on doing amazing things. The comments on YouTube are just filled with hope, optimism, and general awesomeness. FWIW that link goes straight to the money shot - but it's always so much better if you watch it all the way through. It's an amazing broadcast.


When I was in elementary school back in the 1970s, I read every sci-fi book in the tiny school library. They were all old, even then. Early stuff by Asimov, Heinlein and Bova. Paperbacks on cheap pulp with cover paintings of rockets sitting upright on alien terrain. Tiny people in space suits climbing down ladders to explore a new world.

With the Apollo moon landings in recent memory, I'd read those sci-fi books late at night with a flashlight under the covers of my bed and then fall asleep thinking about how "I'll still be alive 50 years from now. I'll get to actually live in the world of the future. Maybe I'll even work in space." And by the time I graduated from high school it was already becoming clear things were going much too slowly for me to even see humans colonizing Mars. And that was reality until about a decade ago.

So, yeah. Watching the live video of the first successful Starship orbital launch with my teenage daughter... I got a little choked up, which surprised me. Felt like discovering a very old dream that's been buried too long. And somehow the damn thing's still alive. Or maybe I just got something in my eye. Anyway, I know it's too late for me to ever work off-planet. But maybe not for my kid... so, the dream lives on. It just had to skip a generation.


Thank you for this beautiful comment. I could have written it word for word. I still watch every Starship launch with my kids, and CRS-7 was the first Falcon launch that we missed watching live. At that time we were waiting months between launches. And I'm currently petting a dog named Asimov while writing this.

SpaceX brought our childhood dreams back. But more importantly, SpaceX is bringing our naive childhood expectations to fruition.


Seeing a rocket land vertically goes against almost 70 years of what we "know" about rockets. Falcon 9 rockets landing on legs seem unnatural enough; now we have a rocket, the size of a 20-story building, landing on chopsticks.

There are lots of vertical-landing rockets ... in science fiction, but only from before Sputnik in 1957. Once actual space programs came about and lots of engineers understood just how difficult landing a rocket is compared to launching it, they all went away. Fictional vehicles became more and more complex to make them "realistic" (that is, consistent with real spacecraft on the news), or just didn't bother with the details at all and went to quasi-magic technologies like in Star Wars and Star Trek.

SpaceX is taking us to the future by going with something from the past.


SpaceX landing and catching boosters is amazing, but landing rockets is not new: all the Apollo LMs, indeed everything ever landed on the Moon was done with "vertical-landing" rockets.

Not to rain too much on your parade, but the DC-X program did vertical landing 30 years ago.

Yes, and that was all the experimental program did. No humans on board, no payloads, no orbit, not even suborbital as they stayed close to the ground.

The Falcon 9 puts humans into orbit, then turns around and lands not far from the launch tower. It's then brought in for maintenance and is launching again a few weeks later - some of them have done 20 flights.


You're comparing an experimental program that lasted 6 years with a company founded 22 years ago. How many payload flights did SpaceX do 6 years into its existence?

No, you made that comparison two posts up. I just replied to you ))

Nice, what became of it?

It came at about the time budgetary winter hit the US space budget, so there was no follow-up on the demonstrator.

>What a strangely beautiful sight.

"My god, Bones, what have I done?"


It’s a pretty expensive way to make fireworks.

Excitement guaranteed

This works as long as your Dockerfile is reasonably reproducible and does its best to lock dependencies. However, this approach has failed me a couple of times in the past. For example, I rebuilt a container some weeks later; in the meantime, a new version of clang had been released that just so happened to break my build due to a bug.

I personally use Nix these days, but the complexity is too high for me to recommend it to everyone for every software project.


Yeah, Nix pretty much solves this problem. The other day I wanted to try a really old version of spaCy for fun/historic interest. spaCy 1.8.2 installed freshly from the binary cache on NixOS-unstable as if it was still April 2017.


Anecdotally, I disagree. Since the release of the "new" 3.5 Sonnet, it has given me consistently better results than Copilot based on GPT-4o.

I've been using LLMs as my rubber duck when I get stuck debugging something and have exhausted my standard avenues. GPT-4o tends to give me very general advice that I have almost always already tried or considered, while Claude is happy to say "this snippet looks potentially incorrect; please verify XYZ" and it has gotten me back on track in maybe 4/5 cases.


Zod has been a great boon to a project I've been working on. I get data from multiple sources where strong types cannot be generated and checking the schema of the response has caught a great number of errors early.

Another feature that I use intensively is transforming the response to parse JSON into more complex data types, e.g. dates but also project-specific types. In some situations, I also need to serialize these data types back into JSON. This is where Zod falls short the most for my use case. I cannot easily specify bidirectional transforms; instead, I need to define two schemas (one for parsing, one for serializing) and only change the transforms. I have added type assertions that should catch any mistakes in this manual process, but it'd be great if I didn't have to keep these two schemas in sync.
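
To make the pain point concrete, here's a minimal sketch of the two-schema workaround I mean (the field names are just made up for the example):

    import { z } from "zod";

    // Parse direction: wire JSON (string timestamp) -> domain object (Date)
    const EventParseSchema = z.object({
      name: z.string(),
      startsAt: z.string().transform((s) => new Date(s)),
    });

    // Serialize direction: same shape, only the transform is reversed
    const EventSerializeSchema = z.object({
      name: z.string(),
      startsAt: z.date().transform((d) => d.toISOString()),
    });

    type Event = z.output<typeof EventParseSchema>;

    const event: Event = EventParseSchema.parse({ name: "launch", startsAt: "2017-04-01T00:00:00Z" });
    const wire = EventSerializeSchema.parse(event); // { name: string; startsAt: string }

Keeping EventParseSchema and EventSerializeSchema in sync by hand is exactly the part I'd love to get rid of.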

Another comment mentioned @effect/schema [0], which apparently supports these encode/decode relationships. I'm excited to try it out.

[0] https://effect.website/docs/guides/schema/introduction


TypeBox[1] also supports bi-directional transforms.

[1]: https://github.com/sinclairzx81/typebox
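
Roughly, if I remember the Transform API correctly (Timestamp is just an illustrative name, not something from your project):

    import { Type } from "@sinclair/typebox";
    import { Value } from "@sinclair/typebox/value";

    // One schema carries both directions: Decode for parsing, Encode for serializing
    const Timestamp = Type.Transform(Type.String())
      .Decode((value) => new Date(value))
      .Encode((value) => value.toISOString());

    const decoded = Value.Decode(Timestamp, "2017-04-01T00:00:00.000Z"); // Date
    const encoded = Value.Encode(Timestamp, decoded);                    // back to string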


Thank you for the pointer! I will certainly consider TypeBox as well when the time comes to migrate.


I use schema extensively and I can tell you it hits the sweet spot for your use case. We have lots of similar use cases.


Ferrocene [0] exists today! I'm mainly interested in the space from the point of view of a theoretical computer scientist, so I'm not sure if there are additional boxes that need to be checked legally, but it's looking pretty good to me.

[0] https://ferrocene.dev/en/


In the sense that there are a variety of requirements that need to be checked, there are. Each industry is different. Ferrocene has mostly been driven by automotive so far because that’s where the customers are, but they’ll get to all of them eventually.


I really appreciate your reply!

> In the sense that there are a variety of requirements that need to be checked

Does "requirement" in this context refer to the same thing as a particular ISO/EN/... standard? Or do you mean that there are a multitude of standards, each of which make various demands and some of those might not yet be fulfilled?

My wording was much more ambiguous than I intended. What I meant to convey was that I don't know what hurdles there are beyond conforming to the relevant certifications. I.e. in the automotive context, Ferrocene is ISO 26262 certified, but is that sufficient to be used in a safety-critical automotive context, or are there additional steps that need to be taken before a supplier could use Ferrocene to create a qualified binary?


No worries! And once again, just to be clear, I don't work directly in these industries, so this is my current understanding of all of this, but some details may be slightly off. But the big picture should be correct.

It means a bunch of things: there are a multitude of standards, so just ISO 26262 isn't enough for some work, yes. But also, safety-critical standards are different from, say, the C standard. With a programming language standard, you implement it, and then you're done. Choosing to use a specific C compiler is something an organization does of their own volition, and maybe they don't care a ton about standardization, but being close enough to the standard is good enough, or extensions are fine. For example, the Linux project chose to use gcc-specific extensions for C, and hasn't ever been able to work with just standard C. Clang wasn't possible until they implemented those gcc extensions. This is all fine and normal in our world.

But safety critical standards are more like a standardized process for managing risk. So there's more wiggle room, in some sense. It's less "here is the grammar for a language" and more "here is the way that you quantify various risks in the development process." What this means is, so like, the government has a requirement that a car follows ISO 26262. How do you demonstrate that your car does this? Well, there are auditing organizations. The government says "hey, we trust TÜV SÜD to certify that your organization is following ISO 26262." And so, if you want to sell a car, you get in touch with TÜV SÜD or an equivalent organization, and get accredited. To put it in C terms, imagine if there was a body that you had to explain your C compiler's implementation-defined behavior to, and they'd go "yeah that makes sense" or "no, that's not a legitimate implementation." (by the way, I am choosing TÜV SÜD because that is the organization that certified Ferrocene.)

Okay, so, I want to sell a car. I need to write some software. I have to convince TÜV SÜD that I am compliant with ISO 26262. How do I do that? Well, I have to show them how I manage various risks. One of those risks is how my software is produced. One way to do that is to outsource part of my risk management by purchasing a license for a compiler that also implements ISO 26262. If I was willing to go to the work of certifying my own compiler, I could use whatever I want. But I'm in the car business, not the compiler business, so it makes more sense to purchase a compiler like that. But that's fundamentally what it is, outsourcing one aspect of demonstrating I have a handle on risk management. Just because you have a certified compiler doesn't mean that any code produced by it is totally fine. It exists as one component of the larger project of demonstrating compliance. For example, all of the code I write may be bad. So while I don't have to demonstrate anything about the compiler other than that it is compliant, I'm gonna need to demonstrate that my code follows those guidelines. Ferrocene has not yet in my understanding qualified the Rust core or standard libraries, only the compiler, and so if I want to use those, that counts as similar to my own code. But this is what I'm getting at, there's just a lot more work to be done as part of the overall effort than "I purchased a compiler and now I'm good to go."

I hope that helps.


That was a very in-depth reply and is very appreciated. One point I did not expect is that core and alloc are not qualified yet. In any case, you motivated me to do some research of my own to fill in the gaps in my own understanding. What follows is my attempt to summarize all of this in the hope that you and anyone else reading it may also find it helpful.

I want to take a step back: why does the automotive industry care about certain qualifications? Because the legislature mandates that they follow them so that cars are "safe". In Germany the industry is required to follow whatever the "state of the art" is. This is not necessarily ISO 26262, but it might be. It might also be one of the many DIN norms, or even a combination thereof.

ISO 26262 concerns itself with mitigating risks and hazards introduced by safety-critical systems and poses a list of technical and non-technical requirements that need to be fulfilled. These concern both the final binaries and, to some degree, the development process. As you pointed out, the manufacturer needs to ultimately prove to some body that their binaries adhere to the standard. Use of a qualified compiler does not appear to be strictly necessary to achieve that. However, proving properties of a binary that is the result of a compilation process is prohibitively difficult. We'd rather prove properties of our source code.

However, proving properties of source code is only sufficient to show properties of the binary if the compilation process does not change the behavior of the program. This is where having a qualified compiler seems to come in. If my compiler is qualified, I may assume that it is sufficiently free of faults. Personally, I'd rather have a formally verified compiler, but that's obviously a much larger undertaking. (For C, CompCert [0] exists.)

Now, as you point out, none of this helps if my own code is bad. I still need to certify my own code and Ferrocene can be a part of that. However, to circle back to my prior question of additional boxes that need to be checked: Yes, any Rust code written (and any parts of core, alloc, and std that are used) needs to be certified, but Ferrocene's rustc is ready to be used in software aiming for ISO 26262 compliance today. No additional boxes pertaining to rustc need checking, although qualified core and alloc would certainly be helpful.

[0] https://www.absint.com/compcert/


That looks very interesting.

I think these sorts of activities must come from outside because the core Rust team currently has no experience in these areas.


I use GitHub Copilot at work. We were (and still are) not allowed to use our private accounts, but recently got company accounts we are allowed to use on codebases classified "confidential" or lower. We also have an internal chat interface to OpenAI's models with a similar restriction. I understand there's some extra agreements with Microsoft regarding our data.


It is definitely available today and I believe it was available shortly after the announcement.


The text-to-text model is available. And you can use it with the old voice interface that does Whisper+GPT+TTS. But what was advertised is a model capable of direct audio-to-audio. That's not available.


Interestingly, the New York Times mistakenly reported on and reviewed the old features as if they were the new ones. So lots of confusion to go around.


It seems like they implemented permission checks purely in the frontend, and not just on one endpoint, but almost everywhere.

While it is conceptually easy to avoid this, I have seen similar mistakes much more frequently than I would like to admit.

Edit: the solution "check all permissions on the backend" reminds me of the solution to buffer overflows: "just add bounds checks everywhere". It's clear to the community at large what needs to be done, but getting everyone to apply this consistently is... not so easy.


> Edit: the solution "check all permissions on the backend" reminds me of the solution to buffer overflows: "just add bounds checks everywhere". It's clear to the community at large what needs to be done, but getting everyone to apply this consistently is... not so easy.

I don't see those as the same. Buffer overflow checks are a very specific implementation (and language) detail and can happen absolutely anywhere in a codebase. Permission checks happen at a specific boundary and are related to how you design your application.

Whenever I had any say on how a project was developed, I'd always insist on a clear separation between the development of the backend API and the frontend client code. In my experience, it makes things like this much easier to avoid (and test for). You also get a developer API for "free" (which to be honest, is the main reason I prefer to do it that way).


You shouldn't be touching the server-side code if you find this hard to keep straight.


Junior developer probably opened a Jira ticket, saw a UI of a permission dialog, and did exactly that task with nobody senior enough to know better. That's how you reproduce the bugs that were in fashion 15-20 years ago, in my experience!


There are two new frontend guys on the team and an understaffed backend, but it's manageable, nothing as dramatic. Still, constantly having to remind them of the benefits of doing as much as possible in the backend and keeping the logic on the frontend high-level is aggravating.


Also no review or planning anywhere in that process.

I'm semi-confident that if a Junior were to talk to another Junior before starting about things to look out for, and then the code was reviewed by say a third Junior, they would not have this bug.

Call me naive, but I don't think Juniors are as oblivious as they are made out to be


I should add this works best if you hire with some diversity, such as one Junior with a preference for security topics.

If you go up to the counter and yell "10 React devs please", don't be surprised


As a long-time engineering manager, I have significantly benefited from this: leveling me up in WCAG/ADA, strcmp timing attacks, performance timing. Many managers start younger than we maybe should, and the burnout that I had in my early years pulled me out of my passion to learn more about comp sci. It was the random enthusiasm of younger folk that, in those times, was my exposure to topics I hadn't dived into yet.

I have witnessed that hiring, listening to, and supporting early-career enthusiasm has significantly improved every startup I've had the joy to be a part of.


Seems like a solid development cycle.

Junior tries something -> hit production

I do not see multiple issues with this.


9 out of 10 PMs love this one hack to boost velocity

they were probably thinking what a 10x engineer they'd found to be so rapid at delivery...


This is so true. I've seen this so many times. The darling of product, who can deliver so fast. They leave a trail of smoking rubble and half working features behind them.


Why can't you be more like darling of product over there.

What do you even do around here; all you seem to ever do is take darling of product's code, make a few changes (which I don't understand), and commit it as your own work. It appears you are either trying to take credit for darling of product's work or sabotaging their amazing 10x work.


Are you my manager


Hey, it worked on my machine


Ultimately, I don't disagree.

However, I also try to make it a habit to not blame people for not knowing something. This presents as a structural problem in that company: they needed to hire people who do know how to secure server code and put them into a position to do so. Blame the company and those who decided to save every last penny in personnel cost.


> I also try to make it a habit to not blame people for not knowing something

There's a point where critical thinking skills come into play. I've seen people walked off the premises for doing stuff like this with customer data. Actual seniors who have never been blamed for anything are suddenly intolerable threats to the company because they didn't bother to check what they were doing and forced the company to disclose a breach.


Sexual preferences and such are special category data, and if you are an engineer dealing with this stuff you should treat it as though data breaches could get someone killed.

Sure, part of the responsibility of this is on management, but it's absolutely on the engineers too.


So if someone who can't drive, finds a car with the keys in it, and starts driving it, and causes an accident, who do you blame?

And do you have any reason at all to believe the backend people didn't know? They wrote a fair amount of code and infrastructure, so they cannot have been blank slates.


> who do you blame?

The people who hired the person who can't drive and gave them a job as a driver.

> do you have any reason at all to believe the backend people didn't know?

Well, either they knew and wanted to implement proper auth and were prevented from doing it, or they knew and couldn't be bothered, or they didn't know that their backend system wasn't properly locked down and were too incompetent to have a clue.


People getting paid to create software should know better then these basic mistakes.


than


Yeah, the people who put those people into the position to touch server-side code are to blame. But then the OP is right: the people who made these code changes really should not have touched anything server-side, or even anything security relevant, in the first place.


They shouldn't have to - the architecture should be made in such a way that permission checks are done without you specifically having to call them every time. This is the entire reason middleware exists!
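
To sketch what I mean, assuming an Express-style backend (requireRole, the role name, and the route are all made up for illustration):

    import express from "express";

    const app = express();

    // Hypothetical role check, applied once as middleware so no handler can forget it
    function requireRole(role: string): express.RequestHandler {
      return (req, res, next) => {
        // Assumes an earlier auth middleware attached the user to the request
        const user = (req as any).user;
        if (!user || !user.roles.includes(role)) {
          res.status(403).json({ error: "forbidden" });
          return;
        }
        next();
      };
    }

    // Every route under /admin gets the check, instead of each handler re-implementing it
    app.use("/admin", requireRole("admin"));

    app.get("/admin/users", (_req, res) => {
      // Only reached if the middleware let the request through
      res.json({ users: [] });
    });

    app.listen(3000);

The point is that the permission decision lives at one boundary, not scattered across every handler (or worse, only in the frontend).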


Well, but apparently they let people create the architecture who just shouldn't have touched the backend code. That's the whole point. Since it was not just a single endpoint or so - it was everywhere!


I agree they should just quit but that requires experience to understand too. By the time they've learned that, they have also learned that client-side access controls are decorative.


right*


u are wise and right


Eternal September. Everyone starts somewhere, it’s just all the time now. In ten years, the dev will explain to a junior how bad they messed up, and why they have to validate this way. Well, I don’t know, but that’s what I hope.


yeah, now imagine another engineer going "my first bridge just fell apart the first time a real truck tried to cross over it lol" or "man my first plane crashed so hard"...


Ya know, the Roman tradition was, you gotta stand under the bridge while the army marches over it. If it collapses, you die too. Maybe there's something to having nudes of that dev.

Real engineering is expensive. And hard. Moving atoms around is tough. I've never cut stone, but I've melted and cast copper and aluminum. That's real and dangerous work.

Computation is cheap and plentiful. And I kinda like having full control of "stuff". But maybe we do need licensing or personal liability. If I could wave a magic wand, and make that exist, I don't really know what rules I'd put in place.

How do you think people should get skilled up?


> How do you think people should get skilled up?

You didn't ask me but I can give you my answer: not on prod and with a lot of reviews!


> Maybe there's something to having nudes of that dev.

Most users of these sorts of app don't pay enough attention to security to care. Do you really think that most developers are any better?

Most developers are just normal people who happen to be able to write a bit of code and convinced someone to employ them. Just like anyone else, far too many live under the delusion that "it can't happen to me."

Translation: making them eat their own dogfood and risk their own embarrassment won't help; they would have to know better, first! =)


That's an interesting idea. Bridge-building and flight sims are used in industry to test whether a bridge design will fail or a plane will crash. They're not limited to oversimplified and fun video games.

I wonder if there's a market for a "write a CRUD app and let it loose on the Internet and watch it get pwned" simulator/game.


That's hiring a pen tester, and there is a market for it, but companies don't do it as much as they should because it costs money while the app already "works" and brings in revenue. Of the 3 I've worked at, only one had yearly pen tests done.


No one hires someone to test what happens when a bridge is shot with a missile from 6000 miles away. The bridge "works" in the same way that the software "works".


A software penetration tester has the same techniques and suite of tools for pwning as "the internet".


I don't see how that statement follows mine. Can you connect them at all?


I thought you were making the comparison that a pentester is like a missile shot at a bridge whereas the internet is the army walking over the bridge.


Oh, I see. No, the missile is a hacker attacking your software remotely. Bridges are just accepted that they will collapse if deliberately attacked by a determined attacker. Software is held to a higher standard, not a lower one.


Yet after decades of this messaging, we still have these people touching the server-side code. Is it likely another few years of the same messaging will fix it?


I work in security and I don't trust myself to tie my shoes correctly every day.

OP's comparison is great. Bounds checks are easy. There are many overconfident C++ programmers who say they would never introduce a vulnerability like that. But it still happens, because in this class of vulnerabilities it's often enough to forget one check.


They didn't ;)


I feel bad now I'm sorry to the person I replied to lol. I didn't mean 'you' I meant a generalized third party person.


I once caught a webdev doing frontend authentication with a plain JavaScript dialog. Yes, simple! Put the password in the JS, and do a simple comparison. Why did I notice? Because the owner of the lamp account contacted me to say that all their data was suddenly gone. Checked the logs, and yep, Googlebot had clicked all the "Delete" links in their internal management view. Simply because JavaScript is opt-in :-) Called the developer and educated him on what he had just done. I lost a lot of trust in web people that day.


This can happen very easily, I think, if one uses "automatic db APIs" on the backend. I'm thinking of some automatic GraphQL setups, for example.

I flag it whenever I see it, but it is very worrying how little thought is sometimes put into the scope of client APIs.


This is unfortunately quite common in mobile apps, because "why would a user look closer in a mobile app".

I want to blame juniors, the no-code and ai-code crowd, but I'm as lazy as they are and will just shake my head and move on.


I have recently used jd with great success for some manual snapshot testing. At $work we did a major refactor of $productBackend, so I saved API responses into files for the old and new implementation and used jd (with some jq pre-processing) to analyze the differences. Some changes were expected, so a fully automatic approach wasn't feasible.

This uncovered a few edge cases we likely wouldn't have caught otherwise and I'm honestly really happy with that approach!

One thing I would note is that some restructurings with jq increased the quality of the diff by a lot. This is not a criticism of jd, it's just a consequence of me applying some extra domain knowledge that a generic tool could never have.


> I would note is that some restructurings with jq increased the quality of the diff by a lot.

I would really like to know more about these restructurings. Would you mind dropping me an example here or at https://github.com/josephburnett/jd/issues please? There are some things I won't do with jd (e.g. generic data transformations) but I do plan to add some more semantic-aware metadata with the v2 API.

Also, I'm glad this tool helped you! Made my day to hear it :)

