Redbox left PII on decommissioned machines (digipres.club)
294 points by NotPractical 75 days ago | 162 comments



> Redbox.HAL.Configuration

> .ConfigurationFileService implements IConfigurationFileService

> STOP MAKING SERVICES AND FACTORIES AND INTERFACES AND JUST READ THE FUCKING

> JSON FILE YOU ENTERPRISE FUCKERS

I know it's cool to "hate" on OO, but "just read the fucking file" doesn't work if you want to run your unit tests without reading a fucking file.

It makes sense to abstract configuration behind an interface so you can easily mock it out or implement it differently for unit testing.

Perhaps you also want to have some services configured through a database instead.

This isn't a ConfigurationFileServiceFactoryFactory.
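For the concrete-minded, a minimal sketch of the kind of abstraction being defended here (the names and shape are my illustration, not Redbox's actual code):

    using System.Collections.Generic;
    using System.IO;
    using System.Text.Json;

    public interface IConfigurationService
    {
        string GetValue(string key);
    }

    // Production implementation: actually reads the fucking file.
    public class JsonFileConfigurationService : IConfigurationService
    {
        private readonly Dictionary<string, string> _values;

        public JsonFileConfigurationService(string path) =>
            _values = JsonSerializer.Deserialize<Dictionary<string, string>>(File.ReadAllText(path))!;

        public string GetValue(string key) => _values[key];
    }

    // Test implementation: no file, no disk, no problem.
    public class InMemoryConfigurationService : IConfigurationService
    {
        private readonly Dictionary<string, string> _values;

        public InMemoryConfigurationService(Dictionary<string, string> values) => _values = values;

        public string GetValue(string key) => _values[key];
    }

The consuming code takes an IConfigurationService and never knows which one it got.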


> I know it's cool to "hate" on OO, but "just read the fucking file" doesn't work if you want to run your unit tests without reading a fucking file.

Then don't do that. If in the real world it'll read a fucking file, then test with reading a fucking file. Tests aren't there to just be passed; they're there to catch problems, and if they're not testing the same workflows that the code will see IRL, then the test is flawed. The first test should be reading a fucking file, and that fucking file could be full of all sorts of garbage.

Same goes for non-fucking files.


Those are integration tests. Integration tests are great, but not when you want to run thousands of them in a few minutes. And not when you want to have lots running in parallel, accessing and potentially making "changes" to the same files.

I'm happy to have a long running integration test suite that runs on a build server.

But while working on a project, I need fast running unit tests that I can edit and run to get fast feedback on my work. I find that "time to iterate" is key to effective and enjoyable development. That's why hot module reloading is an amazing innovation for the front-end. The back-end equivalent is quickly running affected unit tests.

So I'd rather unit test my FooFileReader to make sure it can parse (or not) what's in various files, and unit test my service which consumes the output of my FooFileReader by either parameterising the FooFile result or having an IFooFileReader injected. (Either works to separate concerns.)

While unit testing, I'm going to test "given that System.IO.File can read a file", and write tests accordingly. I don't want a test that sometimes fails because "read errors can happen IRL". That doesn't help test my business logic.

I can even test what happens if read failures do happen, because I can set up my mock IFooFileReader to throw a FileNotFoundException or any other exception. I'd rather not have to force a real-world scenario where I'm getting such an error.
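With Moq, for example, simulating that failure is a couple of lines (IFooFileReader, Read, FooService and Process are the hypothetical names from above; xUnit assumed):

    [Fact]
    public void Service_SurfacesReadFailures()
    {
        var reader = new Mock<IFooFileReader>();
        reader.Setup(r => r.Read(It.IsAny<string>()))
              .Throws<FileNotFoundException>();

        var service = new FooService(reader.Object);

        // Assert the business logic surfaces the failure sensibly.
        Assert.Throws<FileNotFoundException>(() => service.Process("settings.foo"));
    }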

In a functional world, it's the difference between:

    function string -> result
and

    function string -> parsedType -> result
The second is cleaner and neater, and you can separately test:

    function string -> parsedType
    function parsedType -> result
The second is more testable, at the cost of being more indirect.
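In C# terms, that split might look something like this (FooConfig, Result, and the method names are invented for illustration):

    public record FooConfig(bool Enabled, int Threshold);

    public enum Result { Ran, Skipped }

    public static class Foo
    {
        // function string -> parsedType
        public static FooConfig Parse(string json) =>
            System.Text.Json.JsonSerializer.Deserialize<FooConfig>(json)!;

        // function parsedType -> result
        public static Result Process(FooConfig config) =>
            config.Enabled && config.Threshold > 0 ? Result.Ran : Result.Skipped;

        // Composed only in production: function string -> result
        public static Result Run(string json) => Process(Parse(json));
    }

Parse and Process can now be unit tested independently; only Run ever exercises the composition.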

Interfaces and factories are just an idiomatic .NET way of doing this indirection over services and classes.

Of course you can also write more in a functional style, and there are times and places to do that too.


You're mixing definitions - integration tests concern testing the "integration" between all parts of a solution. It has nothing to do with reading a JSON file; it's perfectly acceptable to read from a JSON file and use its data in a unit test.

Also, reading / parsing a JSON file is fast enough for hot reloads / auto-rerunning unless you have multi-GB files - so the argument for speed makes no sense. I'd argue it's slower as a whole to have to code up mocks and fill in the data than to copy-paste some JSON.

I do agree with the second being neater, however past a certain point of enterprise coding it's a negligible difference compared to the overall complexity of the code - so taking a shortcut and making your tests simpler through JSON files actually ends up being the cleaner / neater solution.

>While unit testing, I'm going to test "given that System.IO.File can read a file", and write tests accordingly. I don't want a test sometimes fails because "read errors can happen IRL". That doesn't help test my business logic.

Since you're given that - use it. If your test fails because a "low level" dependency is failing, that indicates something is seriously fucked up on your machine.


It's an absurdly common mistake though, on the level of Hungarian notation being misused and having to split it into two names.

Basically too many unit testing tutorials were simplified too far, so the vast majority of people think a "unit" is syntactic rather than semantic. Like, a single function rather than a single action.


> While unit testing, I'm going to test "given that System.IO.File can read a file", and write tests accordingly. I don't want a test sometimes fails because "read errors can happen IRL".

That sounds pretty squarely in the "you ain't gonna need it" category. If your test harness cannot make a temporary directory and populate it with a copy of the test config file that's stored in the same SCM repo as the test case code, then you simply have a broken CI server. There's no need to complicate your codebase and make your tests less realistic all to avoid hypothetical problems that would almost certainly break your test suite before the test case gets around to attempting an fopen. Just read the damn file.

There are more complicated instances where mocking and dependency injection is needed. "fopen might fail on the CI server" usually isn't one of them.
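And the realistic version really is cheap (a sketch; xUnit assumed, ConfigLoader is a stand-in for whatever code is under test):

    [Fact]
    public void LoadConfig_ReadsTheDamnFile()
    {
        // Arrange: a real file in a throwaway directory.
        var dir = Directory.CreateTempSubdirectory();   // .NET 7+
        var path = Path.Combine(dir.FullName, "config.json");
        File.WriteAllText(path, "{ \"maxRetries\": 3 }");

        // Act: just read it.
        var config = ConfigLoader.Load(path);

        // Assert
        Assert.Equal(3, config.MaxRetries);
    }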


> And not when you want to have lots running in parallel, accessing and potentially making "changes" to the same files.

Reading a file is a fast operation these days. Re-reading a file shortly after a read is little more than a memory copy, since it's served from the page cache.

Making the structure more complicated so that you can avoid reading a file during unit tests is a poor investment of resources - that complexity will haunt the team forever.


The vast majority of codebases that spam factories are misusing the pattern and simply add more boilerplate and abstraction bloat for something that is easily expressible in true idiomatic C# itself.

You see it everywhere someone handrolls a "ServiceResolver" or "DtoMapper" that wraps what DI or the ORM already handles on your behalf, simply because it's consistent with ancient, badly written code descended from practices from heavier Java codebases, and before that C++.


Unit tests are nice to have if you want to hit test coverage targets or have sufficient time to implement them properly. In practice they either encode vague assumptions (the test passes, but the integration breaks because those assumptions were false) or catch things any basic code review should catch (and if you keep paying peanuts, reviewers won't do that, so you write more unit tests).


A good interface is testable, this is how you build up reliable abstractions to solve higher level problems. The devs on my team that take shortcuts here waste more time in the end.

There is no cost trade-off.


In most cases, especially for important code paths, I agree.

There is a case where I think it is justifiable to not write a single test: startups. Specifically, pre-seed and seed-round-funded startups, I think, are allowed to skip the majority of tests - however, critical paths, especially those that are important to customers (i.e. transactions), must be tested.

By the time you have built out that MVP and have a few customers, you should transition to writing more tests. And as the number of engineers, scope, or complexity grows, you need to add tests.


It's testable right up until the point where it's asynchronously interactive.

Would unit tests have avoided the Therac-25 incident?


> Tests aren't there to just be passed, they're to catch problems

So many developers don't understand this simple concept - it manifests in 2 ways: 1. Not writing tests 2. Writing too many / too specific tests

Testing should always be focussed on the OUTCOMES never the implementation. That's why they're so good for making sure edge cases are covered - since we are able to assert the input and expected outcome of the code. I like to use the mental image that in an ideal world I could put the same tests on a completely separate implementation and it would still pass (mocks/stubs, and implementation specific tests don't pass this).

I'm always far more frustrated by 2 than by 1 - since 2 adds so much unnecessary code / complexity that doesn't need to be there, growing technical debt through the tool that should help us manage it. They make changing implementations painful. And worst of all they think they're doing something correctly and when combined with the sunk-cost fallacy they're incredibly resistant to changing these fucked up tests.

Don't get me wrong 1 is annoying too but he'll at least add the tests when you ask him to and not over engineer everything.


There's a lot of room for nuance. If you "just read the fucking file" but the file isn't a "real" configuration file then isn't it just a "mock?" If you replace all network calls with an interceptor that forwards all calls and responses, and just check what's happening as a "listener," aren't you mocking out the network calls to a non-real implementation?

At the end of the day, tests are necessarily a mock-up of what's real. You just happen to disagree with where some people put the abstraction layer. I also would like to make my tests more "real", but I have a lot of sympathy for folks that are trying to test something smaller without involving, e.g., a file. After all, the whole point of "everything is a file" in Unix is that we shouldn't need to worry about this detail; it's an OS concern. If you write to a file that's not actually a file on disk but actually a device, it should fundamentally be okay and work as expected.


Yeah don't get me wrong, I'm not anti-mock - real code is messy, and the ideal of the same tests running everywhere will never work, so mocks are necessary. But I do think there's a lot more harm from over-mocking, than under-mocking.

> file isn't a "real" configuration file then isn't it just a "mock?"

I want to say "no" but I haven't thought about it enough yet. My reasoning is that the file itself contains information about the expected messages to/from systems, since it is the body of whatever the system should respond to. And while it is only 1 layer separated from just creating the same object in memory for your test this "feels" different because you can't just pull it out of your codebase into curl.


Just to work this out together a little more in discussion form, since I appreciate your attitude:

Consider these two scenarios:

- read "test1-config.json" from disk, into whatever most easy JSON-adjacent format makes sense for your lang

- just use the JSON-adjacent format directly

Isn't the difference between these that one requires coupling input configuration of the environment to the tests (possibly inclusive of env vars and OS concerns around file I/O), making running the tests more confusing/complicated in aggregate, while the other requires coupling input configuration to the tests, making the unit under test clearer but potentially less reflective of the overall system?

Effectively this is just an argument between integration tests and unit tests. Unit testers certainly have the rhetorical upper hand here, but I think the grug-brained developers among us feel that "the whole program should be a pure function."

That can ultimately be reduced to a P-NP problem.


yeah - I don't think we should go so far as to write a config file for a test. But if we have something that is already readily convertible to/from json, it should be used. Not seeing it so much as a config for a test but as an argument we're storing in a separate file.

For example if we had a dto that serialises to/from json we should be storing json, not creating this dto manually - I would push it further and say the same for any structure which is easily transformed from json, like extracting a certain property and using that in the test (although this is also context dependent, for example: if there are other tests using this same file). As a counter example I wouldn't advocate for using json config files to test something completely unrelated to an underlying json structure.
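Concretely, the fixture-over-hand-built-dto version is just (OrderDto, PricingService, and the fixture path all invented for illustration; xUnit assumed):

    [Fact]
    public void DiscountedOrder_TotalsCorrectly()
    {
        // Fixtures/order-with-discount.json is checked in next to the test.
        var json = File.ReadAllText("Fixtures/order-with-discount.json");
        var order = JsonSerializer.Deserialize<OrderDto>(json)!;

        Assert.Equal(42.50m, PricingService.CalculateTotal(order));
    }

And that same json can be pulled straight out of the repo into curl when you want to poke the real endpoint.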

> That can ultimately be reduced to a P-NP problem

Yeah ideally the goal should be to write the simplest code possible, however we get there - shoehorning an approach is always going to add complexity. I think there's a lot of danger from taking rhetoric too far, sometimes we push an abstraction to its limits, when what's really required is a new perspective that works well at these limits.

Effectively I think there's a range in which any argument is applicable; it's a matter of assessing if the range is large enough, the rules simple enough, and whether it solves the actual problem at hand.


> yeah - I don't think we should go so far as to write a config file for a test. But if we have something that is already readily convertible to/from json, it should be used. Not seeing it so much as a config for a test but as an argument we're storing in a separate file.

This sounds like a great candidate for the facade pattern: https://en.m.wikipedia.org/wiki/Facade_pattern

Basically you hide dependencies behind a small interface, which lets you swap out implementations more easily. The facade is also part of your codebase rather than an external API, so it gives you something stable to mock. Rather than a building facade like the name is based on, I think of these as a stable foundation of things a module calls out to. Like your code is a box with an interface on one side (what tests and the rest of the codebase interact with) and the facade(s) are on the other side (dependencies/mocks of dependencies).
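A minimal sketch of that shape (names are illustrative):

    // The facade: the only surface the module and its tests ever see.
    public interface IConfigSource
    {
        string ReadRaw();
    }

    // Production: hides the fact that a file is involved at all.
    public class FileConfigSource : IConfigSource
    {
        private readonly string _path;
        public FileConfigSource(string path) => _path = path;
        public string ReadRaw() => File.ReadAllText(_path);
    }

    // Tests: a canned string, no I/O, no mocking framework needed.
    public class FakeConfigSource : IConfigSource
    {
        private readonly string _raw;
        public FakeConfigSource(string raw) => _raw = raw;
        public string ReadRaw() => _raw;
    }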


I have seen over-reliance on the facade pattern devolve into endless indirection that can make the code needlessly confusing. If you are already familiar with the codebase, it doesn't seem like a big deal, but when you onboard, you'll find your new teammate combing through file after file after file just to discover, "oh, there's never any external API call or specific business logic involved, we were just reading a static json file from disk that does not change its content during a single run."

Using the known, baked-in stdlib functions for standard behavior removes a lot of potential uncertainty from your codebase (while also making it sometimes harder to test).


Yeah, that's called a fucking integration test.


Yeah modeless software is one honking great idea. (RIP Larry Tesler)


You are right. I read their posts as the ramblings of someone who is currently in shock, found a bunch of bad practices in the logging + data retention, and is now just tongue-in-cheek mocking (the puns...) everything even if they don't have much experience with it. I would probably say something similarly incorrect if I found some perl and tried to understand it because I know nothing about writing maintainable perl.


> maintainable perl

isn't that an oxymoron?


Could or should there just be an `IConfigurationService` instead of a separate IConfigurationFileService? Yes, probably.

"Interface all the things" is a bit lazy, but it's easy, especially if you have Moq as a way to auto-mock interfaces and a DI framework to setup factory methods.

But spinning into rage just because you see an interface or abstract factory isn't healthy.


I don't follow .Net closely, but it seems like there should be a better alternative. Java has a library called "Mockito" that can mock classes directly without requiring an interface. I assume something similar exists for .Net, as they have similar capabilities. Making an interface for one class, just so another class can be tested, seems like letting the tool (tests) determine the architecture of what it is testing. Adding complexity in the name of TDD is a close second on my list of triggers.

There's nothing that triggers* me more than seeing an interface that only has one implementation. That's a huge code smell and often a result of premature architecture design, in my opinion. It also often leads to complexity where, if you have an interface, you create a factory class/method to instantiate a "default" implementation. Fortunately it seems that it is not used as often as before. Our code has no factories and only a few interfaces, which actually have a practical use. The same applied to my previous workplace.

* The trigger applies to 2024 Java code written as if it was 2004. I may have a form of PTSD after many years of interfaces and FactoryFactory, but fortunately times have changed. I don't see much of that today except in legacy systems/organizations.


I'm sure the same exists for .NET ( Moq can probably do it? ), but writing against an interface and having concrete implementations supplied by the DI framework is pretty much the ordained way to do things in .NET.

I used to be in the "Interfaces with only a single implementation is a code smell" camp, but I prefer to follow the principle of least surprise, so going with the flow and following the way the MS standards want you to do things makes it easier to onboard developers and get people up to speed with your code base. Save "Do it your own way" for those parts of the system that really requires it.

And technically the auto-generated mock is a second implementation, even if you never see it.


I think you have a good approach. I also tend to go with the flow and follow the common practice. If I tried to do "Java in C#", it would make it more difficult to follow my code and decrease maintainability.

I sometimes work on legacy C# code that we inherited from another team and I try to follow the style as close as possible. I just haven't invested enough time to make any informed decisions about how things should be.


You are both over thinking it.

GIT? unit tests? and i thought debuggers spoiled us?

Although cavemen-esque in comparison to 'modernity', it wasn't a nightmare to pause/resume program flow and carefully distill every suspected-erroneous call through Console.Log(e)/stdout/IO/alert(e)/WriteLine(e) - everything to find the fun/troublesome bits of one's program - instead of a tedious labyrinth of stack traces obfuscating any useful information, further insulted by nearly un-googable compiler errors.

Tests were commented-out function calls with mock data.

If you never need to instantiate another instance of a structure so much so that it would benefit from an explicit schema for its use - whether it be an object or class inheritance or prototype chain - then sure, optimize it into a byte array, or even a proper Object/struct.

But if it exists / is instantiated once or twice, it is likely best optimized as raw variables - short-cutting OOP and its innate inheritance chain would be wise, as well as limiting possible OOP overhead, such as garbage collection.

  >interface in C#
Coincidentally, that is where my patience for abstraction in C# finally diminished.

yield and generators gave off an awkward syntactic over-caramelized sugar smell as well - I saw the need, to complement namespaces/access modifiers, but felt like a small tailored class would always outweigh the negligible time-save.


I love the idea of syntactic engagement gone too far as "burnt sugar"


Moq® cannot do it. I forked Moq® and made a library that can mock classes: https://github.com/Kuinox/Myna. It can do that by weaving the class you mock at compile time for your mocks (you can still use your class normally).


What’s wrong with having an interface with one implementation? It’s meant to be extended by code outside the current repo, most likely. It’s not a smell in any sense.


In that case you have more than one implementation, or at least a reasonable expectation that it will be used. I don't have a problem with that.

My comment was regarding interfaces used internally within the code, with no expectation of any external use. I wrote from a modern Java perspective, with mockable classes. Apparently interfaces are used by .Net to create mocks in unit tests, which could be a reason to use that approach if that is considered "best practice".


90% of single-implementation interfaces (in Kotlin on Android projects I've seen) are internal (package/module private, more or less.) So no, they are not meant to be extended or substituted, and tests are their only raison d'etre (irony: I've almost never seen any actual tests...) This is insane because there are other tools you can use for testing, like an all-open compiler plugin or testing frameworks that can mock regular classes without issues.

An interface with a single implementation sometimes makes sense, but in the code I've seen, such things are kludges/workarounds for technical limitations that haven't been there for more than a decade already. At least, it looks that way from the perspective of a polyglot programmer who has worked with multiple interface-less OOP languages, from Smalltalk to Python to C++.


Yeah, IConfigurationService implies separation of concern. Code using it doesn't have to care where the configuration came from, just that it is there. Someone separately can write the concrete ConfigurationFileService:IConfigurationService that reads/parses files.

IConfigurationFileService implies abstraction of file system-based configuration. Are we planning that there's going to be a different way to read configuration files in the future, and what exactly is that? If no one can articulate it, it just seems like architecture astronautism and: YAGNI.

IConfigurationService makes writing unit tests for anything that uses it way easier, too. There can be a simple TestConfigurationService:IConfigurationService that just implements everything as settable, and in your test code you can provide exactly the properties you need (and nothing more), and easily have 100 variations of configs to ensure your code is working. Without the headache of dealing with actual files separate from your test code, or worse, shared with other test code.

I've actually written multiple long-lived pieces of software this way, and more than once ended up implementing stuff like environment variable-based configuration, REST API-sourced configuration, and even aggregations that combine multiple sources, eg:

    new AggregateConfig(new ServerConfig("https://whatever"), new EnvironmentConfig(), new FileConfig("/some/path.config"));
All that code that used IConfigurationService is completely untouched and unaware of any of this, letting whoever is doing this as part of changing deployment (or whatever) be productive quickly with very little knowledge of the rest of the (possibly massive) app.
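And the settable test double described above can be almost embarrassingly small (a sketch; the property names are examples, assuming IConfigurationService exposes them read-only):

    public class TestConfigurationService : IConfigurationService
    {
        // Settable in tests, read-only through the interface.
        public string ConnectionString { get; set; } = "";
        public int MaxRetries { get; set; } = 1;
        public bool FeatureXEnabled { get; set; }
    }

Then each test just news one up with only the properties it cares about, e.g. new TestConfigurationService { MaxRetries = 5 }, and hands it to the (hypothetical) service under test.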


If you need to make your code more baroque and harder to understand in order to unit-test it, that seems like the tail wagging the dog.


Exactly! It's like that Skinner Simpsons meme. Are unit tests the problem and I'm wasting my time? No, it's the config files that are wrong.


> more baroque and harder to understand

I don't understand how this is the case. If anything, an interface is MUCH easier to understand than a variety of functions strung together.

I mean, this is the whole reason we have APIs. If I'm a consumer, I would much rather read and understand the API and its contract than try to read through the code to find out requirements.


We are talking about a single function that possibly takes zero arguments versus an interface (TFA doesn't seem to show the code, but the interface presumably exists for DI).

I have waded through such code in Java, rather than C#. At least some of it is fighting the language; Java is pretty hostile to writing DI style code.

On top of that, even in languages that are more DI friendly, DI significantly decreases the lexical locality of code that is dynamically very close.


I don't think that you necessarily need lexical locality; rather, you need specification. And interfaces are just an easy way to make a specification. The problem with "too much" lexical locality is that now you have to search DEEP into stacks to figure out what's really going on.

The whole point of specifications is trying to extract the most requirements in the least amount of time. With more "top level" interfaces and tools like DI you can do that, but certainly you can take it too far. A single function, I'd say, is way too far.


Isn't "dependency injection" (aka passing arguments) the big thing that's supposed to solve this?

  Config config;
  // production
  config_from_file(&config, "config.json");
  run_production_stuff(&config);
  
  // unit tests
  Config config;
  config_from_memory(&config, &some_test_values);
  run_tests(&config);


Yes, and the typical pattern for .NET DI is to do so with interface based parameters.

So let's say you have a service FooService that requires some configuration.

(Ignoring the System.Configuration namespace for now.)

You'd have:

    class FooService(IConfigurationService ConfigurationService){
        // Access Configuration Through IConfigurationService
    }

Then elsewhere you'd set up your DI framework to inject your ConfigFileService to satisfy IConfigurationService in prod.

Yes, it can sometimes feel a bit like "turtles all the way down", where sometimes you just wish you had a bunch of concrete implementations.

In unit tests, you'd auto-mock IConfigurationService. For integration tests you might provide a different concrete resolution.

There are some advantages to service based DI though. The standard ASP.NET DI framework makes it trivially easy to configure it as a singleton, or per-request-lifetime, or per-instantiation, without having to manually implement singleton patterns.

This gives you good control over service lifetime.
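For example, with the standard Microsoft.Extensions.DependencyInjection container, the lifetime is a one-word decision at registration time (using the ConfigFileService naming from above):

    // Singleton: one instance for the app's lifetime.
    builder.Services.AddSingleton<IConfigurationService, ConfigFileService>();

    // Or scoped: one instance per HTTP request.
    // builder.Services.AddScoped<IConfigurationService, ConfigFileService>();

    // Or transient: a new instance every time it's injected.
    // builder.Services.AddTransient<IConfigurationService, ConfigFileService>();

    // FooService just declares IConfigurationService in its constructor;
    // the container supplies whichever registration is in force.
    builder.Services.AddTransient<FooService>();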


My example above is terrible, because in reality you'd have another level before this, which sorts out your global configuration, reads it and just injects service specific parameters and configuration for each service.

But I just wanted to illustrate idiomatic .NET DI, and on reflection picking configuration was probably the worst way to illustrate it.


But why is it so hard to read a file during a unit test? Files are pretty easy to mock in many different ways, all of which are pretty fast. You don't need a special-purpose interface to be able to test the code that uses a config file.


Let's say you want to test bootstrapping your system with various configurations.

You could make a few dozen different configuration files. Or maybe it's more than that because you want to test permutations. Now you're maintaining a bestiary.

So instead you think "I'll write code that generates the config file for each test". And that's reasonable sometimes.

On the other hand, the single-responsibility principle can be reasonably applied here and "reading the config data" is a good single responsibility. You don't want to have to write code to muck with json or xml every time some component needs a configuration value. So there should already be a clean boundary here and it often makes sense to test the components separately.

There's not one rule. The article author sounds like an excitable jr engineer that's never written software at scale.


> that's never written software at scale.

Is this like a newer version of that insult where people would say someone's opinion doesn't matter because they worked on a project that never shipped (regardless of how much or how little they contributed to the failure)? Just replacing it with an AWS bill-measuring contest?


"Software at scale" is different from "data at scale" is different from "compute at scale".

But yeah, when I hear "STOP MAKING SERVICES AND FACTORIES AND INTERFACES AND JUST READ THE FUCKING JSON FILE YOU ENTERPRISE FUCKERS" I think "developer who's never worked on anything more complicated than a chat app, and isn't old enough to have learned humility yet".


> Now you're maintaining a bestiary.

Any battle-hardened test suite is already a bestiary. Having a subfolder of diverse & exemplary config files, that could be iterated over, is not adding much to the pile.


A totally reasonable approach! Sometimes.


The interface in question is `IConfigurationFileService`. I can only guess at the actual interface, but based on the name it doesn't sound like it's abstracting away the need to put your configuration into files.

Could just be a case of bad naming and it solves everything you're saying. But it sounds like pointless enterprise-y fuckery to me.

I would not say the same thing about `ConfigurationFileService : IConfigurationLoader` or something.


Perhaps a better example is a real world example I ran into just this week.

I found out that our unit test suite would only pass when run under elevated credentials. Our internal developer tooling had been running under semi-privileged credentials for years, and was the usual way of triggering a full unit test suite run, so no-one really noticed that it didn't work when run at a lower elevation.

When run from a lower privilege, a unit test was failing because it was failing to write to a registry key. I first double checked that I wasn't accidentally triggering integration tests, or that the test should be tagged integration.

But no, we had simply failed to abstract away our registry writes within that service. Of course no unit test should be writing to the real registry, but this settings manager was just being new'ed up as a concrete class, and there was no interface for it, and so it was just naively making registry edits.

That this settings class wrote directly to the Windows registry as its data store wasn't noticed as an issue for years, because every time it had previously been run, it had been under credentials which could access that registry key.

And yes, there are different ways we could have mocked it, but favouring a concrete class meant this registry edit was happening unnoticed across all our unit test runs. And I suspect this might have been behind some of the dreaded flaky-test syndrome, "I just tweaked a CSS file, why did my PR build fail?". Because 99% of the time it was fast enough that it didn't cause issues, but with just the right concurrency of test execution, you'd have a problem that wouldn't show up in any meaningful error message, just a failed test "for no reason".

Why shouldn't unit tests read real-world files? Because that introduces brittleness, and an inherent link between tests. If you want fast and easy to parallelize tests they need to have no real-world effects.

A test suite which is effectively pure can be executed orders of magnitude more quickly, and more reliably, than one which depends on:

  - Files

  - DateTime.Now (Anywhere in your code. A DateTimeFactory which you can mock might sound ridiculous, but it's perhaps the best thing you can do for your code if your current code / tests run on real DateTimes. Even for production code, having a DateTimeFactory can be really helpful for relieving some timing issues; see the sketch below. )

  - Databases ( This is more "obvious", but still needs to be said! )
And so on. A unit test suite should boil down to essentially pure statements. Given inputs A,B,C, when applying functions f, g, h, then we expect results h(g(f(A,B,C))) to be X.

This can also be the difference between a test taking <1ms and taking <10ms.
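That DateTimeFactory can be tiny; a hand-rolled sketch (.NET 8's abstract TimeProvider now fills the same role in the BCL):

    public interface IDateTimeFactory
    {
        DateTime Now { get; }
    }

    // Production: the real clock.
    public class SystemDateTimeFactory : IDateTimeFactory
    {
        public DateTime Now => DateTime.Now;
    }

    // Tests: time stands still, so assertions are repeatable.
    public class FixedDateTimeFactory : IDateTimeFactory
    {
        public DateTime Now { get; }
        public FixedDateTimeFactory(DateTime fixedNow) => Now = fixedNow;
    }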

As a final point, you're usually not wanting to "test the code that uses a config file", you want to test code which you don't care if it uses a config file.

The "Code that uses a config file" should be your Configurator class. What you actually want to test is some service which actually produces useful output, and that contains business logic.

Yes, "separation of concerns" can be taken too far, but having something else responsible for the manner in which your service is configured so that your service can just take in a business-domain relevant typed settings object is helpful.

As I've said elsewhere, config is actually a terrible example, because it's essentially a solved problem, MS released System.Configuration.ConfigurationManager ( https://www.nuget.org/packages/system.configuration.configur... ), and you should probably use it.

If you're not using that, you ought to have a good excuse. "Legacy" is the usual one of course.


> DateTime.Now

I've got this in several places.

I have code that needs to check if another date is within two weeks of today. I have test data.

I could either modify the test data based on today's date (adding other logic to tests that itself could be faulty), do something while loading the test data... or have the date to be used for comparisons be injected in.

That date is July 1, 2018. It was selected so that I didn't have difficulty with reasoning about the test data and "is this a leap year?" or across a year boundary on January 1.

It's not a "I don't trust it to work across those conditions" but rather an "it is easier to reason about what is 60 days before or after July 1 than 60 days before or after January 1".

And returning to the point - injectable dates for "now" are very useful. Repeatable and reasonable tests save time.
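A sketch of that shape (IsWithinTwoWeeks stands in for the real check; xUnit assumed):

    public static bool IsWithinTwoWeeks(DateTime other, DateTime today) =>
        Math.Abs((other - today).TotalDays) <= 14;

    [Fact]
    public void TwoWeekWindow_IsEasyToReasonAboutMidYear()
    {
        var today = new DateTime(2018, 7, 1);   // injected, never DateTime.Now

        Assert.True(IsWithinTwoWeeks(new DateTime(2018, 7, 10), today));
        Assert.False(IsWithinTwoWeeks(new DateTime(2018, 8, 1), today));
    }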


It's not, but maybe I don't want to create a file for the tests. The point they're trying to make is that it's a personal preference and not an obvious "this way is better".


Most of these issues disappear with the introduction of first-class functions. There's nothing noble about the thick indirection inherent in old school enterprise programming.


A first-class function is just an interface. Just ask the venerable master Qc Na.


Kind of off topic, but can someone explain why else C# has factories and interfaces? Is it just mocking? I really don't understand the pattern at all. FWIW I am no dev.

EDIT: Found xnorswap's comment below about configuration, which makes sense I get - but as they mentioned, it does feel like "turtles all the way down".


I don't think it's off-topic at all. The ideal this kind of thing is trying to achieve is separation of concerns. One developer or team or even entire organization is writing things like serializers for specific kinds of file formats or other sources of persisted data like databases and environment variables, or things like the Java Spring Boot externalized config. Another organization is just trying to create an application that requires configuration. They don't necessarily want to have to worry too much about where it comes from. Especially in places that strictly separate development and operations, they'll probably have no say anyway, and it'll change over time, and they're not gonna want to change their own code when it does.

You can analogize this to non-software use cases. I've got Phillips-head and flathead screwdrivers and ideally don't want to have to worry about the specific qualities of a particular kind of screw when selecting one. It either has one slot on the head or two. That's the interface to the screw, and it should be the only thing I have to worry about when selecting a screwdriver.

Unfortunately, this kind of thing can balloon out of control, and in the worst kinds of "enterprise" Java shops I was involved in deep into my past, where concrete classes were injected at runtime by xml file loaded into the framework, it was literally impossible to tell what code was going to do simply by reading it, because it is impossible to know what is being injected at runtime except by inspecting it during runtime. It's a pretty frustrating experience when reading an entire code base doesn't tell you what the code actually does.


> it was literally impossible to tell what code was going to do simply by reading it, because it is impossible to know what is being injected at runtime except by inspecting it during runtime. It's a pretty frustrating experience when reading an entire code base doesn't tell you what the code actually does.

I've worked in "legacy" nodejs code since node 0.x. Glad to hear that there might be hope of codebases that don't have this problem. I thought typescript would help, but I've learned that fancy generics can ensure that it's still quite possible to have no idea what something will actually do in a real world environment, you'll just have a lot more cognitive overhead in wondering about it.

To be clear, I love ts and fancy generics that try to impose a Haskell-like determinacy on js Object structure with exclusivity and exception guarantees and all the rest; I just also hate it/them, at the same time.


I've used them in the past to keep interface and implementation separate. It's an easy way to stick an adapter between something concrete and the thing that needs something but doesn't care where it's coming from.

So, for example, I could have a IGadgetStore with methods for creating, retrieving, updating, and deleting gadget instances and then I can have a bunch of different classes implementing that interface. An obvious example is to have a PostgresGadgetStore and a MysqlGadgetstore and CsvFileGadgetStore. If the user wants to implement their own store that I haven't written, they can.
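Roughly this shape, for illustration (Gadget itself assumed defined elsewhere):

    public interface IGadgetStore
    {
        Gadget? Get(Guid id);
        void Create(Gadget gadget);
        void Update(Gadget gadget);
        void Delete(Guid id);
    }

    // Consumers depend only on the interface and never care whether
    // PostgresGadgetStore, MysqlGadgetstore, CsvFileGadgetStore, or a
    // user-supplied implementation is behind it.
    public class GadgetCatalog
    {
        private readonly IGadgetStore _store;
        public GadgetCatalog(IGadgetStore store) => _store = store;

        public bool Exists(Guid id) => _store.Get(id) is not null;
    }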


OK that makes sense. As an outsider, the C# code bases I look at seem to do this as standard practice, even if the requirement for different classes never materialises. I guess you get used to looking at it, but it seems (perhaps naively) as wasteful and a bit distracting.


> it seems (perhaps naively) as wasteful and a bit distracting.

It is.

Luckily, .NET is nowadays usually able to do away with the abstraction cost of such interface abuse, but it remains an additional item you mentally have to deal with, which isn't good.

Single-implementation interfaces are still considered an anti-pattern, and teams that over-abstract and mock everything out when writing unit tests usually just waste time in pursuit of a multi-decade-old cargo cult. These tests also often assert that modules are composed of specific implementations, not whether they simply satisfy the interface contract, which is terrible. And they often turn stateless parts of logic that could have lived on some static class into an interface and an implementation injected with DI, which is then mocked out, instead of just calling methods on a class. More difficult to remove, worse locality of behavior, doesn't answer the question "if tests are green, are we confident this will work in prod?", sadness all around.

I agree with your sentiment. It's much more practical to write functional and component-level tests with coarser granularity of individual test items, but with more extensive coverage of component inputs. There's a wealth of choices for doing this with little effort (e.g. testcontainers).


A non-mocking use of mine: I have a factory with a method that returns instances of a particular interface for publishing typed events to a pub/sub service. The caller of the factory doesn't have to be updated with new concrete types as new events are added, because it's the factory's responsibility to create the events. The event types themselves just implement the interface that's required to serialize them for the pub/sub service.
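One plausible sketch of that arrangement (all names invented; the real transport is whatever the pub/sub service needs):

    public interface ISerializableEvent
    {
        // The one thing the pub/sub transport requires of every event.
        string Serialize();
    }

    public interface IEventPublisher<TEvent> where TEvent : ISerializableEvent
    {
        void Publish(TEvent evt);
    }

    public class PublisherFactory
    {
        // Callers never name concrete transports; adding a new event type
        // touches only the event class itself.
        public IEventPublisher<TEvent> For<TEvent>() where TEvent : ISerializableEvent =>
            new LoggingPublisher<TEvent>();   // transport chosen in one place
    }

    // Stand-in transport so the sketch is self-contained.
    public class LoggingPublisher<TEvent> : IEventPublisher<TEvent>
        where TEvent : ISerializableEvent
    {
        public void Publish(TEvent evt) => System.Console.WriteLine(evt.Serialize());
    }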


The thing that should be tested is whatever you are handing to a client app to use. If it is an interface, then test the interface, not the file behind it. If the client will get the file, then test that the file is correct.

So in this case, is this entire file being handed to clients to do what they will with it? Does that make sense as an interface?

If you are building an app, and you want other parts of the app, or clients of the app, to use this data, does it make sense to just hand over the entire file as the interface?

Basically :

Programmer 1: "Hey man, show me your data interface so I can build a quick lookup on something"

Programmer 2: "here is the whole damn file, do whatever you want and leave me alone. Figure it out yourself, and no I'm not going to give you a schema and I'll change it whenever I want".


Your unit tests should just take the result of loading the file as an argument or other type of injection param. Then you can hardcode your unit test config parameters in the test code itself. That's the appropriate place for this kind of indirection.
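Something like this, with no filesystem in sight (AppConfig, Worker, and MaxAttempts are hypothetical; xUnit assumed):

    [Fact]
    public void Worker_UsesInjectedConfig()
    {
        // The parsed config is just data, hardcoded in the test itself.
        var config = new AppConfig { MaxRetries = 3 };
        var worker = new Worker(config);   // constructor injection

        Assert.Equal(3, worker.MaxAttempts);
    }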


Why do you need the interface? You can extend/mock the class itself. Refactoring code is easy and cheap. There is no reason for complex abstractions that protect implementation outside of libraries and frameworks.


> You can extend/mock the class itself. Refactoring code is easy and cheap. There is no reason for complex abstractions that protect implementation outside of libraries and frameworks.

"Mock" can be a loaded word in this context, so please excuse me if I'm looking at it through a difference lens, but if you're using some sort of mocking set of tooling (like Jest or similar), I'd argue that those mocks are much more confusing than an interface with an implementation.

I personally love an interface because it defines the most narrow set of operations an object needs to support to be passed in, and the implementation of those are completely irrelevant for calling. In many cases, I personally find that a lot simpler and cleaner to read.


I am lost - is that a code comment, or the author commenting on something they found?

Because I always enjoy leaked code comments. It’s like “tell me how you really feel about this shitty bloated enterprise framework you are using”.

There were some good ones in the leaked windows source code, weren’t there?


That's the author commenting on something that they found.


I'm sure you've seen this, but "The Simpsons Hit & Run Source Code Comments, Read by Comic Book Guy" (https://www.youtube.com/watch?v=R_b2B5tKBUM) is an all-timer.


Take this as a lesson. If you've been a dev long enough, you've worked on a project knowing that how the project is being done isn't the best method with every intention of going back to make it better later, but not at the expense of getting the MVP up and running. You'll also have seen that never actually happening and all of those bad decisions from the beginning still living all the way to the bitter end.

I'm guessing not one person involved would have ever imagined their code being left on a machine just left out in the open exposed to the public completely abandoned by the company.


They might have had the most perfectly developed decommissioning process. And nobody is going to care when their paychecks stop showing up, and everything suddenly gets trucked-off into receivership.

Given the era and constraints, I don't see how it was irresponsible or 'sloppy' to have a local database on these things. This most likely is not on development.


> and everything suddenly gets trucked-off into receivership.

That's the problem. These things aren't getting collected and trucked off. They are just left rotting in their installed locations. I'm pretty confident that you could just show up to any of these with a tool box to just start opening one up to take out whatever you wanted from the insides, and not one person would question you. They already said they don't care if you never returned any discs you had in your possession, so nobody would care if you emptied their inventory of discs within. And until now, I'd wager not one person inside the company ever thought they might be vulnerable to a PII attack like this either.


The host locations are pissed off that the machines are sitting there taking up space and using electricity. They certainly aren't going to be happy with someone opening it up and making a mess. Or potentially creating some sort of additional liability for them.

But if you show up with a van or a large truck, they'd probably pay you money to take the whole thing off their hands. And you can tear it apart in your own garage.


> taking up space and using electricity.

how hard would it be to unplug the units? if these things are on a shared electrical circuit with anything else in the store, then that's on them. If they are separated, then just flip the breaker. otherwise, there's going to be j-box some where near the thing connected to some conduit. ten gets you twenty that there's some wire nuts in that j-box that could be disconnected in less than a minute.

also, just show up with a clipboard, and if anyone asks you, just say you were hired to collect certain items from within but not remove the entire thing. just print up a fake work order. i don't think i'm thinking too far outside the box on this one.


There's probably lots of great robot disc-handler stuff in those boxes.


There was an article in some source (sorry, I forget which) that interviewed a person somewhere in the Southeast US who had been paid to remove a dozen or two of them. It had some photos of the inside of the machine. You should look for it!


There's a Discord where people are sharing ideas on sourcing and modifying the kiosks. https://discord.gg/ZNXy722W5t


This.

I often wonder what was left behind when the financial crash happened. Watching documentaries and movies about it, it looked to me like several large banks and insurers closed up shop overnight, and people were leaving the next day with boxes containing their personal effects.


I think you're right in general -- that is, regardless of the original company's practices, the entity selling off the assets should be required to do that responsibly -- but then:

> the unit I've got an image for has records going back to at least 2015.

Whether or not it's "on development" -- that's sloppy. Like how it would be a problem if your preferred grocery store kept your details on the cash register you checked out at almost ten years ago. It's a point-of-sale terminal holding on to high-risk data for no reason.


I recall being surprised to see people using a Redbox in the grocery store. Like, wow, (a) this company still exists, and (b) ppl still watch DVDs. And that was years ago. I think it's not unlikely the company was already in total zombie-mode by 2015.


The end of Chicken Soup for the Soul Media (who owned Redbox and Crackle at the end) was a complete shit show. I’d be unsurprised if they just walked away from the DVD boxes, leaving whoever had them on their property with the job of dumping them.

CSS just stopped paying vendors before the Redbox acquisition to make their balance sheet look better, then just never paid after that until going bankrupt a year later. (My company was a vendor who had to get our attorneys involved to reclaim some payment prior to their bankruptcy, and we will never get the rest.)

I’ve seen a bunch of these SPAC style (there’s usually some sort of penny stock starting point so the company is publicly traded from the jump) rollups of bankrupt or failing media and entertainment brands over the years and they all blow up.


When a developer says “this is a temporary solution” they mean “this is a temporary solution forever”.


There is nothing more permanent than a temporary solution, and nothing more temporary than a permanent solution.


That's very true.

When a customer has a problem, you create a solution to it. Often the problem is part of a much larger space, so you tend to have discussions of all the possible features you could implement. This is a necessary step to gain knowledge about the problem space, but it can lead you to think that the solution has to cover it all.

Time constraints lead to a "temporary" feature to solve the customer's immediate need. Most of the time it turns out that all the other features are neither necessary nor important enough to spend time on.

Projects without time constraints or focus tend to create a complex system that will "solve" the entire problem space. Most of the time it turns out that our assumptions are wrong and we need to do a major refactoring.

"Temporary" solutions may not have the best code structure or architecture, but it is not necessarily a bad technical debt, as many new developers to the project seems to think it is. Also, the time restraints of a temporary solution encourages the developer to create simple code and not "clever" code, because they can always blame time restraints for their lack of cleverness. If the code has been mostly unchanged for years, somewhat maintainable and solves the customer's problems, it's not really a problem.


I agree. Temporary / bandaid solutions are totally great if they are straightforward to implement AND unimplement.


The beta version works fine, let's just keep it.


Never make the "Proof of concept" so good it is actually usable.


Write the proof of concept in a language that the company doesn’t want to support. Erlang, Haskell, Prolog.


Words to be forever-tentatively employed by :)


They mean temporary for them


The good practice of last year is the bad pattern of today.


At least in web development. There's a weird time horizon there: the good patterns from a decade ago are often still good practices.

Pick up the latest and greatest web stack today and it will likely be a rotting, unused stack next year. Pick up Rails or Django and it will be largely unchanged, and you'll still be able to hire for it.


One just wonders if the cooperative-multitasking BASIC implemented in this machine really was necessary for an MVP. Other than if that just happened to be the sort of programming that the developer at the time was familiar with.

Also, this really is the 'engineering' discipline with the lowest level of craftsmanship and regard for rigor, isn't it? Because the failures are usually intangible, not directly life-threatening.


> One just wonders if the cooperative-multitasking BASIC implemented in this machine really was necessary for an MVP.

I have a feeling that someone already had that hammer ready and was in search of a nail. :)


Might be a case of a VB6 programmer who graduated to .NET C# but still misses the good old days.


I worked at RedBox in 2010. C# with embedded Lua for the screens. The intent was to build a flexible architecture for CoinStar to use on many kiosk businesses.

The PII is likely log files that should have been erased nightly, but I don’t remember.

I know the guy that designed the architecture. He’s a friend that I’ve argued with about over-engineering things. He never cared if people understood his work, which is a common theme with old school engineering.


Not just old school engineering. There are fresh engineers rolling off the production line every day who behave exactly like this.

I would even venture to say that senior engineers are probably much better at documenting, writing clean code, and not over-engineering things. My personal experience is that bad engineers move into project management and administration much faster. Which is a problem in itself, since they end up being the boss of engineers who are better than they are.


I agree on the management aspect, and even promoting good engineers "too high" on the "parallel promotion track" can effectively take them out of the engineering work just as much as if they went into pure management. "Does the CTO still code at least sometimes?" is the only tell I have for whether a company has fallen into that trap or not.

For the engineers themselves, though, I think it's a mixed bag when it comes to "older" ones. That is, correlations on just age in the field are weak. Sometimes they're great, sometimes they're really meh. Whether something is over-engineered or not really is case-by-case, part of the problem is sometimes something looks over-engineered but is just the normal thing to them by now, nothing special, even though its architecture handles a lot of concerns that a more naive and obviously not over engineered style would have produced. In that case, they're validated sooner or later.

Sometimes they're more up to date about various new things than even the energetic young bucks, sometimes they're too stuck in their ways. I've seen examples of both knowing about modern hardware details (and being able to take advantage of them) and having a stale idea of how CPUs work that wasn't even quite accurate in the 70s. (So, not too different from a lot of fresh engineers who get educated on such simple models.) I've noticed no correlation with age on whether someone has completely mistaken ideas about JVM performance.

Being stuck in their ways in particular applies to things beyond the pure code and coding style -- and it's not necessarily a bad thing. If they've managed well so far without pick-any-of good documentation, good debuggers, good source control, good editors, good open source, methods to thoroughly avoid various causes of engineering pain, etc., why bother doing or taking advantage of those things now? And if they're skilled, they might even be right, it's probably best not to disrupt them if they're not a bus factor risk. But if they're more on the meh side of things overall, they can hold things back too much.

(By "older", I mostly mean those who were practicing since the late 90s/early 2000s. But I keep in mind that the global supply doubles every several years or so, so it's quite possible for someone with only around 5 years of experience to be in the top half of seniority already.)


I never said old. I said old school. Age is irrelevant.

Guys that adhere ruthlessly to their own framework designs without regard for onboarding timeframes.

They truly believe if you can’t understand their framework designs, you’re not smart enough.

I’ve always believed a critical aspect of software architecture is making sure the developers that will maintain it are a part of the design. If they’re junior and mid-level, your architecture better accommodate their skills.


> He never cared if people understood his work, which is a common theme with old school engineering.

Not in my experience.


It's not just an 'old school' thing. FFS look at the state of the Javascript ecosystem.


There certainly are spots of over-engineered solutions in today’s world, but overall I do believe things like cloud serverless, and domain-driven design (modularization) have reduced complexity in a natural fashion.

These over-engineered systems usually came out of client server architectures using heavy inheritance object pyramids, which we definitely don’t build as much anymore.


Where does Foone keep finding this stuff?

Earlier, Foone finds a NUC:

https://news.ycombinator.com/item?id=41294585


I know an employee in an IT company. He told me that they have hundreds of decommissioned laptops and no time to wipe them. And they won't pay someone else to do it because it's too expensive. So right now they are in storage. If they go bust, the storage company will likely dump them.

I've seen a lot of stuff in e-waste. There are several facilities within 5 miles of my home, and you can walk right in and drop off your old TVs, toasters and laptops in large metal bins. And if you have a quiet word with the attendant you can usually walk off with stuff that someone else dropped off. "Good for the environment mate! I can use this for parts!"

If social engineering works at banks, you can be damn sure it works at an e-waste facility. And if that fails, a few bank notes help.

I don't do this, but I have intercepted e-waste coming from family and friends. In one case I found a treasure trove of photos from my deceased sister. Nobody had ever seen them before. I also found personal documents, internet history, and saved passwords in the browser, which got me into her iCloud account which, until then, nobody could access. This led to more photos and documents. And it was all destined for e-waste.


> I've seen a lot of stuff in e-waste. [...] if you have a quiet word with the attendant you can usually walk off with stuff that someone else dropped off.

In my country, there are specialist e-waste disposal companies large IT organisations can hire, which guarantee to remove and shred the hard drives before recycling the rest.


I don't think that the people responsible for dealing with the mess left from bankruptcy are large IT organizations, they probably barely have the money to transport the waste to the nearest trash dump, unsorted, and forget about "recycling" now that poor countries are less willing to take the e-waste.

Companies leaving their waste as someone else's problem to deal with is a common occurrence (heck, even before they go bankrupt), and in many cases they would never have existed in the first place if they had to pay for all of it, for instance:

https://www.desmog.com/2019/12/20/fracking-oil-gas-bankruptc...

(Note also that there are funds set aside for cleanup, but they are typically woefully insufficient.)

How to prevent companies that are a net negative for society from existing remains an unsolved problem.

(Aside, of course, from disallowing companies altogether. I wonder at what point the limited liability company as a concept becomes a net negative to society? Could that point already be behind us? And of course then comes the question of whether any alternative could make a positive contribution, in a context of overshoot...)


"they won't pay someone else to do it because it's too expensive"

This is the relevant bit. It's cheaper just to pile the old computers in storage vs doing something about it.


>> And they won't pay someone else to do it because it's too expensive.

It's not that entities that will correctly handle the e-waste don't exist; it's that the company doesn't want to pay to have it done, especially when storage is cheap. Let those containers of old laptops and towers be someone else's (budgetary) problem.


> guarantee to remove and shred the hard drives

Now that storage is often an indistinct chip on the motherboard, I wonder how that works.


About as well as it always did, which is to say somewhere between “they shredded it whilst I watched”, “they probably wiped it before letting someone take it home”, and “it shows up on eBay in three days untouched.”


Hopefully that company had enabled FDE on those laptops. (It still would be prudent to wipe them before recycling, of course.)


I am loosely aware of a time that a local bank performed a large data centre migration.

They built new infrastructure at the new site, then just decommed the old building.

Pretty standard stuff, but the old building was like, 1930s era or something. After the hardware was deracked, it was left in a loading dock, in a now abandoned building, behind an unlocked roller door for like 6 months before it was recovered for data destruction.


I know someone who worked at a recycler that was paid to physically destroy media and certify it as destroyed. It took time and money to destroy media, so it just sat in a warehouse for months and the paperwork was lies. Eventually they went bankrupt and another company bought the stock and sold it on eBay by the pallet load, as is.

The only companies that do a proper job are the ones that turn up to your office and shred the hardware in front of you. Paperwork is worth shit otherwise.


I once worked on an ewaste shipment from a rail carrier.

99% of the stuff they sent us was boring corporate desktops running a standard, secure SOE.

1 laptop, however, booted into Windows with saved credentials, connected to a VPN automatically, logged into the rail company's in-house software automatically, and began displaying what I can only assume was a live map of the rail network. Little green lines running along red lines; the green lines would come to junctions and stop. It had buttons. I think I shucked the hard drive and drilled it within 60 seconds and had the hardware completely disintegrated.

One other time a games company that got liquidated sent us a pallet of computers that had source code for a relatively popular strategy game on the drives.

Another games company shipped us their test kits one of which had a dev build of a relatively well known action adventure game. The warehouse guys would play the dev build of the game on their lunch breaks.

Basically no one gives a shit.

All of that before I worked for a business that stored their customer info, including credit cards, in plain text one dirwalk away on their public website. When we complained, we were told that it was fine because they "encrypted" the card numbers. The encryption was adding 1 to the card number. I died.


I hope someone backed all of these up, they sound like a goldmine



In this specific case, they started digging through an HDD image that someone else pulled. They're still trying to get hold of an actual Redbox machine to investigate the hardware as well.


> JUST READ THE FUCKING JSON FILE YOU ENTERPRISE FUCKERS

On a codebase I worked on, accepting an expanded range of data in a field required modifying seven files in three repositories and two (or three) programming languages. Someone thought that's the right way of doing it.


I would love to see people start engineering instead of over-engineering, scared of exposure like this happening to them. And the one thing I couldn’t disagree with more: where I work, it’s not the graduates who write ServiceFactoryBuilder. It’s the guys with 12-15 years of experience, who started with bad PHP and are now "Senior Java" but haven’t learned much since the beginning. This is what corporate software looks like.


I feel like if we stopped trying to reinvent the wheel, and senior devs had 12-15 years to master one version of the wheel, things would be much better.


"1 year of experience, 15 times"


Are there any laws in any country governing how such equipment is supposed to be decommissioned in case of bankruptcy?

This does not seem to be an isolated case; it's happening more and more with the advance of technology.


What are they going to do? Sue the bankrupted company?


If instead of a load of old computers the bankrupt company had a pile of old tyres, I'd expect the receivers winding up the bankrupt company would dispose of the old tyres properly, paid for with the company's assets before anything is distributed to creditors.


If someone creates this fucked-up compute stack and handles the customer data this poorly, what makes you think they would be any better at disposal even if there were some laws around it? It's not like lawmakers have any idea what's going on either.


They are not the company under discussion.


If a company dealing in toxic chemicals goes bankrupt, is it functionally legal to just dump them in the nearby river? I’d be amazed if countries don’t have legal processes in place to deal with situations like this and maybe the courts haven’t caught up to this use case?


We need a Superfund for data spills.


I think there's supposed to be an escrow account where you say like "I'm going to handle X amount of petrol / nuclear material, here's a big pile of cash set aside for cleanup if I dissolve"

One could do the same for PII. Of course it's cheaper not to, so I'm not sure anyone actually sets up this kind of insurance policy.


I think historically in the US that's exactly how it's been done, and then we just clean it up later (or never). In the meantime a bunch of people get sick


I don’t know the US system but in the UK you have two basic routes to running a business: sole trader or LLC.

In the former, short of dying, it’s not possible to “disappear,” because you’re personally liable for everything. I’d guess that includes data breaches.

In the latter, it’s also not really possible because the company must be wound up if it goes bankrupt. In the worst case, the government appoints administrators to wind the company up, who would be responsible for handling the assets.

Now, if the processes aren’t up to scratch, maybe that’s a thing that needs to be fixed, but the structure is there to do it. At least in the UK.


When a company goes bankrupt, a curator comes in who oversees the decommissioning of the company and its assets; I don't know whether a curator is a government agent or a private company (probably either), but they become responsible for the company and, in this case, the customer data.


Or perhaps make sure it gets sanitised before being refurbished and resold. I mean, bankruptcy always means the debt collector tries to sell assets to cover the losses, no?

But this does not answer my question. So basically many people here are unaware of whether such a legal framework exists at all.


It would require setting aside some amount in escrow to fund proper asset sanitisation in case of bankruptcy, or at least giving sanitisation some priority in the asset liquidation proceedings.


Any C# devs wanna explain the XML thing? To me, having a separate class to deserialize each kind of XML document to its respective object seems nice and the "right" way to use XML. The class just becomes the config file. Generic loaders that poke at the tree are super brittle. Does C# come with magic to do this better?

Because if you have to invent the config file, then isn't that creating a DSL, and we're back to over-engineering?


Active C# dev here, but I haven’t read the article.

For configuration these days XML is generally not used; they have a configuration system which can use a variety of underlying sources (like environment variables and JSON files), and you can either access the settings by key from a dictionary or trivially hydrate plain old C# classes, including with collections.

People may still manually read their own configuration independent of this system or perhaps they’re just generally deserialising XML.
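
A minimal sketch of that hydration, assuming the Microsoft.Extensions.Configuration packages (the "Kiosk" section name and the KioskSettings class are invented for illustration):

    using System;
    using Microsoft.Extensions.Configuration;

    // Layer a JSON file under environment-variable overrides, then bind
    // a section straight onto a plain class (needs the .Json and .Binder
    // packages alongside the core one).
    var config = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json", optional: true)
        .AddEnvironmentVariables()
        .Build();

    var settings = config.GetSection("Kiosk").Get<KioskSettings>();
    Console.WriteLine($"{settings?.InventoryPath} ({settings?.MaxDiscs} discs)");

    // Invented settings shape, purely for illustration.
    public class KioskSettings
    {
        public string InventoryPath { get; set; } = "";
        public int MaxDiscs { get; set; }
    }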

There are (I think) at least a few ways to work with XML in .NET.

For well-known schemas I generally recommend the C# class approach, where you simply deserialize a file into well-typed C# classes.
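
A minimal sketch of that approach (KioskConfig and the file name are invented; XmlSerializer itself is the standard BCL type):

    using System.IO;
    using System.Xml.Serialization;

    // Deserialize a whole file into a typed object in one call.
    var serializer = new XmlSerializer(typeof(KioskConfig));
    using var stream = File.OpenRead("kiosk.xml");
    var config = (KioskConfig?)serializer.Deserialize(stream);

    // Invented shape; public properties map to elements by default.
    public class KioskConfig
    {
        public string Location { get; set; } = "";
        public int SlotCount { get; set; }
    }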

From your question it sounds like the XML API which allows you to arbitrarily query or manipulate XML directly was used here. I have on occasion used this when I don’t have the full schema for the XML available and need to tweak a single element or search for something with XPath. It’s a useful tool for some scenarios, but a poor choice for others.


System.Xml has been around for a very long time. I use it in one of my tools as an MSN deserialization class to make all my old MSN history "readable" by traversing the nodes and picking out sending/receiving user, timestamp, and message.
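
Presumably something along these lines - a sketch that assumes the common Messenger history layout (Message elements with Date/Time attributes and From/User@FriendlyName); the real files may differ:

    using System;
    using System.Xml;

    // Walk an exported history file and print sender, timestamp, message.
    var doc = new XmlDocument();
    doc.Load("history.xml");
    foreach (XmlElement msg in doc.SelectNodes("//Message")!)
    {
        var from = msg.SelectSingleNode("From/User/@FriendlyName")?.Value;
        var when = $"{msg.GetAttribute("Date")} {msg.GetAttribute("Time")}";
        var text = msg.SelectSingleNode("Text")?.InnerText;
        Console.WriteLine($"[{when}] {from}: {text}");
    }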


> To me having a separate class to deserialize each kind of XML document to its respective object

You mean a separate function? Or a separate method in some class? (Given that C# can't do top-level functions...)

Or else, you may want a type-parametrized function that uses reflection to populate an object of any class you want. Whether that's worth it would depend on how many different kinds of file you have, but the C# standard library has this function, so you just use it.
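
For JSON that function is System.Text.Json's generic deserializer these days - a sketch, with KioskConfig standing in for whatever shape the file actually has:

    using System.IO;
    using System.Text.Json;

    // One generic, reflection-driven call from the BCL; no factories needed.
    var config = JsonSerializer.Deserialize<KioskConfig>(
        File.ReadAllText("config.json"));

    // Invented shape, purely for illustration.
    public record KioskConfig(string Location, int SlotCount);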


I stopped reading when it got all shouty, and a quick scan doesn't show whether it's got actual sample code.

It's also been a long while since I wrote any C# for real, and even longer since I had to deal with .NET's XmlSerializer.

It used to be a pain in the backside to deal with XML in .NET if you wanted to just deserialise a file to an object. You'd need to mark up the class with attributes and stuff.

I remember being very happy when JSON.NET came out and we could point it at just about any standard object and JSON.NET would figure it out without any extra attributes.

I'm not sure if XmlSerializer ever caught up in that regard.
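
From memory the contrast looked roughly like this - a sketch, not anyone's actual code; the attributes are the real System.Xml.Serialization ones, the Config class is invented:

    using System.Xml.Serialization;
    using Newtonsoft.Json;

    // JSON.NET: point it at nearly any plain class, no markup required.
    var parsed = JsonConvert.DeserializeObject<Config>(
        "{ \"Location\": \"Store 42\" }");

    // XmlSerializer traditionally wanted the mapping spelled out:
    [XmlRoot("config")]
    public class Config
    {
        [XmlElement("location")]
        public string Location { get; set; } = "";
    }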

e: Also, without seeing the code and where else it might be deployed, it's not easy to know whether this is a case of over-engineering.

This same code might be deployed to a phone, webserver, lambda, whatever - so having an IConfigFileLoader might make 100% sense if you need to have another implementation that'll load from some other source.


> I'm not sure if XmlSerializer ever caught up in that regard.

Not really. The one thing that exists to make this easier is xsd.exe, which can generate the code for these classes from an XML Schema file.

https://learn.microsoft.com/en-us/dotnet/standard/serializat...
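
For reference, the invocation is a one-liner from a Developer Command Prompt (PurchaseOrder.xsd is just a stand-in schema name):

    xsd.exe PurchaseOrder.xsd /classes

That emits a PurchaseOrder.cs full of XmlSerializer-attributed classes ready to (de)serialize documents matching the schema.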


Ah, yeah, xsd.exe - I had successfully wiped that from my memory.

I had a ton of scripts that generated classes from XML and SQL Tables. I forget exactly how.


Ahh I miss Foone's threads on Twitter/X!!! They were always such an interesting joy to read. I know they migrated off of Twitter/X, and they were one of the accounts I was most sad to see leave. But given 95-99% of the accounts I follow never left, I'm not going to bother cultivating another social media account just for Foone's threads, or those of anyone else I've forgotten who left Twitter/X.


I am so glad people are leaving Twitter even if they don't do anything else, and maybe especially lol


Sounds like most idiomatic C# codebases I've seen. Words can't express how happy I am to be outside that bubble.


I just watched a video earlier today about how, if you find a working Redbox, you can get movies out of it without getting charged, although you still have to enter your payment info. This prospect suddenly sounds even less appealing.

https://www.youtube.com/watch?v=ucsYziSl7xk


Seen what the rentals are on a Redbox?

Every time I would go browse their movie collection I always walked away uninterested in anything. Most of it was what I call direct-to-inflight-video.


Judging by the movies I've seen on international flights in the past few years, I think this is really unfair to inflight video. (i.e., the available movies were all highly-rated theater movies, not direct-to-video garbage.)

Perhaps a better description would be "even worse than Netflix originals"...


I question whether this was ever true outside of the old-timey planes of decades back, where everyone had to watch the same movie. If you were on a decent international flight you could sometimes even see a movie still in theaters.

The only real regression I've seen in my lifetime in in-flight entertainment is ANA ending their partnership with Nintendo that ran SNES emulators at all the seats. A 14-hour flight to Narita used to involve Super Mario All-Stars.


I didn't bring 5 dollars cash the first time I took a plane to SF from the east coast. You end up watching the movie on a projector screen without sound. I ended up reading through Microsoft Foundation Classes books. On the way back I had my 5 dollars ready.

The movie was about a brother who returns to a small southern town to visit his sister. He ends up staying and helping her with the kids. But his irresponsible ways cause frustration for the sister and the kids he is watching. He leaves. End of movie.

Truly awful. I wonder what the first movie was like.


I'm kinda curious to know which movie this is.


I've been trying to figure it out for a while.

It would have been in the early 2000s. Let me try AI. Found it. What an age we live in.

You Can Count on Me - 2000. Sammy is a single mother who is extremely protective of her 8-year-old son. She is satisfied with living in the small town she grew up in and working in a local bank. When her brother Terry visits he fills the void in the life of both her and her son. Temporarily free of the constraints of single motherhood, she begins to break free of her normal routine. In a string of traumatic events Sammy is torn between helping her brother and her maternal instinct to protect her son from getting hurt.

95% Rotten Tomatoes score. Someone liked it.


I did find this while looking for it the other day: https://en.wikipedia.org/wiki/Uncle_Buck


As an aside, I see we've re-invented Twitter-style blog posts - but did anyone stop and ask why?

This format is so tedious to read - one sentence fragment at a time. It's like reading someone's subconscious inner-dialog shower thoughts.


Sort of the other way around: many social networks had longer formats, and Twitter eventually adopted them. On Twitter it never quite made sense, but on fedi stuff it works a lot better.


That's terrible. The user doesn't say how they got the machine. Maybe a bankruptcy liquidation auction.

Sidenote: As a personal mental quirk, does anyone else ever accidentally read "Redbox" as "Roblox" and then have to double-check?


I love how the all-caps disgust is reserved for the greatest sin, the overengineering.


I once worked for an MSP that would reuse decommissioned storage for other customers and services.

Tried to convince them to stop, and even set up a service on my lunch breaks to try to sanitise the drives using the open guidelines available at the time - an automated service that would display drive-sanitisation status on a text-based panel next to a multi-disk bay caddy in the storage room.

I resigned because of this, and many other similar business practices - all just to make more money (it was not otherwise malicious).

The business is rather successful to this day. I only hope they have stopped such activities.

And yet I still lose sleep over this...


I mean that’s scary, but at this point is there anybody to hold accountable?


Looks like the Mastodon instance got the HN hug of death; has anybody archived it?


[flagged]


Please don't cross into personal attack on HN, no matter how annoying you find someone else's writing.

https://news.ycombinator.com/newsguidelines.html


[flagged]


This is just wish.com ChatGPT


[flagged]


Everyone who posts obviously AI-generated content on hacker news gets heavily downvoted and/or flag killed. Without exception. Why not take a hint and stop posting it?


No one likes your drivel generator. Stop posting that shit here.



