
Tablets should have an HDMI/DisplayPort input so that you can use them directly as displays.


I'd even go one step further: we should have had a standard communications protocol like TCP for all devices. So a display would show up as just another device that we could use to read/write bytes. All devices would have a standard queryable HTTP/HATEOAS self-documenting interface. And HDMI/DisplayPort or USB A/B/C/.../Z would all use the same protocol as gigabit ethernet or Thunderbolt or anything else, so the bandwidth would determine maximum frame rate at an arbitrary resolution. We could query a device's interface metadata and get/send an array of bytes to a display or a printer or a storage device, the only difference would be the header's front matter. And we could download image and video files directly from cameras and scanners as if they were a folder of documents on a web server, no vendor drivers needed.

There was never a technical reason why we couldn't have this. Mostly Microsoft and Apple blocked this sort of generalization at every turn. And web standards fell to design-by-committee so there was never any hope of unifying these things.

Is it a conspiracy theory when we live under these unfortunate eventualities? I don't know, but I see it everywhere. Nearly every device in existence irks the engineer in me. Smartphones and tablets are just the ultimate expression of commodified proprietary consumerist thinking.


In fairness, there are standardised protocols for a lot of these things already, even if they're not all part of one giant meta-protocol. Cameras in particular have mostly appeared as a folder full of files, with no need for special drivers, for something like 20 years.

There's definitely no need to invoke a conspiracy for the lack of 'one protocol to rule them all'. It's often hard agreeing on a standard even for a relatively limited topic - trying to agree on one for all electronic communications for all devices is probably impossible.


The meta protocol exists! Sort of. Check out the USB-C specs, which tried to answer a ton of this. It’s taken years for power delivery to reach the point where I don’t feel compelled to carry a USB-C power meter to check cables and chargers in the wild. My Switch still requires some out of spec signaling to charge/dock properly.

Meanwhile, half of the stuff I get off AliExpress only charges from A to C cables due to a missing resistor.

I don’t think the markets (yet) incentivize implementations. Like how when my mortgage gets resold, autopay will only transfer over if it’s once a month; anything more complex and I have to endure a new account setup and a ton of phone trees. Same with paperless settings. The result? I just live with the MVP.


> There was never a technical reason why we couldn't have this. Mostly Microsoft and Apple blocked this sort of generalization at every turn.

On the contrary, Microsoft tried really hard with UPnP/PnP-X/DPWS/Rally/Miracast*/etc but nobody was interested.

*BTW any Windows 10+ device can act as a Miracast sink (screen) so you can link Windows laptops/tablets as extra screens without any additional software.


Extending your simile, some devices need the equivalent of UDP in order to function within the size/power envelopes that make them useful. Bluetooth vs the nRF24L01+.

There are standards like this in highly interoperable systems, but there’s a cost paid. USB-C power delivery negotiation (beyond the very basic 5V3A resistor that people omit) is roughly as complicated as gigabit ethernet. That compute has to come from somewhere and it turns out customers won’t even pay for that 5V3A resistor - they’ll just use A to C cables and replace it when it “won’t charge” from a compliant charger. :) Average person probably only cares that USB-C can be flipped and that the connector feels less brittle than microUSB.

UPnP exists. Lots of what you describe exists. Between bugs in implementations becoming canon and a lack of consumer interest, no real conspiracy required. At least smartphones and tablets are trending in a good direction - Apple’s latest supports basic off the shelf USB-C Ethernet, displays, hubs, and so on.


Agreed in general. I wouldn't stop anyone, but having my monitor traffic go over the network would lead to a lot of congestion, especially wireless. I'd prefer a separate cable, as the grandparent alluded to.


You can plug a USB HDMI capture dongle into tablets and do this.

Any webcam viewer would probably work to view it, though there are dedicated apps intended for this, like https://orion.tube/ on iPad. I know there are options on Android but I don't have a modern Android tablet to test them.


Do you know how come that app doesn’t work on the iPhone 15 Pro?

I don’t have the iPad, but just recently got the 15 Pro, and it’s able to do a bunch of things via the USB-C port (wired Ethernet, SD card reading, driving a Pro Display XDR etc), but I wasn’t able to do something like that Orion app is showing.

Was thinking of pretty much the same use case as shown in the app, where I could plug in an external camera and use the phone as a high resolution / high-nit viewer display. Are these APIs only for iPadOS because the iPhones are missing some required hardware for it?


I know, I'd love to use my phone as a display via capture card so I don't have to carry a portable monitor to troubleshoot headless boxes.

The developer says the 15 and 15 Pro are only missing software, the hardware is capable:

> I’m sad to say that we’ve confirmed with Apple that it will not be working with the iPhone 15. But this can be fixed in software, so feel free to file a feedback request for UVC support on iOS!

https://old.reddit.com/r/apple/comments/16qzdtx/hi_reddit_we...


Ahh, that sucks. Hopefully a future iOS release will also have UVC support.


C++ finally catches up to Perl :-)


The FreeBSD experience with the upgrade also shows that this is not a smooth enforced upgrade.


This is bad in one aspect:

Google pays Mozilla a lot of money, mainly to keep Google search the default search engine in Firefox.

If that turns out to be illegal it could create a financial crisis for the Firefox browser and hence reduce diversity in web browsers.


this is true. but wouldn't it be nice if everything didn't have to flow through google and could exist on its own merits?

infrastructure like browsers should really be neutral ground - it's sad we can't figure out a way to fund things like that


It would be nice, but it's not realistic. If Google's revenue disappears, then unless Mozilla finds some other form of revenue (which they've been trying to do for years now), Firefox is done for.


At the same time though, when's the last time any of us donated to open source?

I can count myself within the last month... but many will admit that they have never run "npm fund" once.


FWIW, you can't donate to Firefox development.


Mozilla could easily live without Google's money if the vast majority of their income wasn't spent on being Big Nonprofit ( https://news.ycombinator.com/item?id=37180480 )

I would love it if they lost Google's money and trimmed the bloat and focused on making great tech. But something tells me the first to be let go will be the techies, making Firefox effectively a maintenance-only browser. And the army of useless "evangelists" will be there until Mozilla collapses under its own weight.


Maybe we should have a reality check on the true cost of tech. Anything that breaks the tech industry's cheap-money habits is a good thing for the long-term health of the industry.


Only problem is that there's no good alternative for them. You'd be committing Firefox to a slow death


And maybe that's OK and we should learn to live within our means


Is there a chance that Microsoft could make an offer to Mozilla about making Bing the default search in this case? I don't know if this would make sense from Microsoft's business point of view, and it's probably an evil from an internet freedom and diversity point of view. But maybe it is better than the current status quo.


Another take might be that Google's financial clout killed the browser market. That whole $xx.xx CPM thing that only they attain, and others basically pick up the crumbs. Search terms as input to ads are very powerful.


Might be a bit contrarian, but so what?

Mozilla hasn't produced a decent browser for, what, 12 years? They instead take their hundreds of millions of dollars annually and spend it building junk like half-baked password managers.

I'd argue that Mozilla's mishandling of Firefox has been killing innovation in this space for years -- it is a giant red flag for anyone wanting to enter the space, considering Mozilla's budget and its still not being able to produce something of value. The reality is more that Mozilla itself doesn't care about building a better browser. It's only as people started to realise this in the last couple of years that we've started to see some new contenders.

Maybe when Mozilla dies, we might start to see open source efforts going to better browsers.


I actually think Firefox is better than chrome and use it as my main browser. Not sure why you say it’s not decent.

I also think they do a lot of great stuff around Firefox: the email masking is good, the sync function is good, their podcasts and studies are good. Not sure what people mean here.


This is a strong but valid take. Mozilla has a huge focus on getting revenue streams from somewhere, but is flubbing it left and right along the way. The consumer might benefit from a better browser, but Mozilla has basically decided that that’s not what will drive growth or revenues… not that I think they’re particularly correct about that.


Idk, I think people underestimate what it takes to support a browser. It's a lot of money and Mozilla doesn't have an ad empire to leverage, operating system sales to subsidize from, or phone/computer sales.

All they have is the revenue they make from Google, a little bit from their attempts at revenue diversification, and a little bit from their partnerships (like Pocket). If the Google revenue disappears, they won't even be able to maintain the current level of quality.


This to me is entirely fantasy. We are instead overestimating what it takes to support a browser because of how much money Mozilla gets and still can't produce anything. We're talking amounts in the billions of dollars here, and more than a billion in current cash reserves.

This Google-controlled narrative is pushed out onto a lot of things (i.e search) which people are slowly starting to realise isn't that complex or expensive to build after all (see Brave Search for example). It's not surprising that Mozilla would echo similar sentiments considering the entire company is controlled opposition.


What other similarly sized or smaller company has successfully built a browser which is not Chromium-based fit for the average consumer? If you're just piggybacking on Google's work, then sure, it's probably not bad, but Mozilla has their own browser engine and is trying to keep feature parity with browsers which have effectively unlimited funding.


Well yes, it’s clearly expensive to support a browser, and that’s why they’re trying to make money, but they’re failing miserably to deliver on those other revenue streams and they aren’t keeping up in the browser space. That’s all; if they were developing some business model that worked, that would be different. That’s why breaking off Google’s monopoly would kill them: they are dependent on it.


If Firefox loses a lot of funding but Google loses the ability to pay companies to make Chrome the default on nearly all devices sold globally today, Firefox will be in much better shape than it is now. Google is killing it, and also giving it a few dollars.


how would the loss of funding put Firefox in better shape? Are you saying that exposure would mean dollars would flow in from donations from new users?


I think they are saying that Firefox would be in a better position to increase market share.

I am not sure that I agree. Specifically, I doubt that Mozilla would be able to realize any revenue even if every OEM made Firefox the default browser.


ah thanks for that, I was wondering how it makes their financial position better. Losing this case imo would knock them out completely


I'm saying if Google can't throw billions at other companies to push Chrome over Firefox, Firefox can get those users for pennies on the dollar, or even be considered the free better choice for product vendors to include.


I think you're misunderstanding....

If Android OEMs aren't locked into Chrome, they'll probably request payment from Edge, Brave, and other startup browsers to be the default. Whoever pays the most will get the spot. And that browser will end up being ad-ridden to help pay for that.

I could totally imagine browsers having ad-blockers that don't block ads, but merely replace "bad" website-provided ads with browser-provided "good" ads.


Even with tons of new users, how would Mozilla be able to fund continued development of Firefox?

Mozilla will need to find a way to raise funds.


Disappointing that this test doesn't report on how well suspend/resume works.


OP here. Suspend/resume worked flawlessly with Archlinux and Kernel 6.4


Correction: the kernel was newer, 6.5.x. Here is an old probe https://linux-hardware.org/?probe=88cfdb061d


The OP could solve most of those problems by switching back to FreeBSD.


I would like to have a resource like this, but instead of the PoC I want to see the diff that fixed the flaw in the software.

Anything like that around? I know it isn't trivial.


I could see how to do this for some projects, like Django: get the list of their security updates. For each release, it lists the CVEs it fixes and the patch. The patch gives you the fix diff.

https://docs.djangoproject.com/en/4.0/releases/security/


Planning to do some ML training?


You can have type checking in Common Lisp:

https://medium.com/@MartinCracauer/static-type-checking-in-t...
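
For reference, here is a minimal sketch of what that looks like in SBCL (the function name and types are made up for illustration):

```lisp
;; Declare the function's signature up front; SBCL checks
;; calls against it at compile time.
(declaim (ftype (function (fixnum fixnum) integer) add2))

(defun add2 (x y)
  (+ x y))

;; (add2 "a" 1) now draws a compile-time warning from SBCL,
;; because "a" is not a fixnum.
```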


I was slightly surprised to learn how well Common Lisp had implemented its types. I keep wondering why CL almost completely failed to break into the minds of people in the early 2000s. That was about the time I first learned about Python, which kind of seemed to be everywhere. It took 5-10 years before I even heard of Common Lisp.

And now it seems to me that the Common Lisp which was pretty much fixed in the '90s is superior in many ways (runtime, programming environments, typing -- to mention a few) even to the revised Python 3 of 2021. And then JavaScript, essentially a bad clone of Lisp, got popular? Makes absolutely no sense.


I’m also surprised by this and I continue to love Common Lisp over all other languages. But I think one reason is that languages aren’t just languages. The amazing parts of Common Lisp were all standardized in 1992 or whatever. By people who are not at all involved in any kind of Unix, Linux, open source, web, scripting, etc. It’s like a beautiful cultural legacy that’s maintained by some enthusiasts and a couple of insular commercial vendors. Now what I really wonder is why nobody outside of some Scheme dialects has stolen the restartable condition system, which is so amazing and straightforward to implement.
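
For the curious, a minimal sketch of what restartable conditions look like (the names here are invented for illustration): the low-level code offers named restarts, and a handler further up the stack picks one without unwinding first.

```lisp
(define-condition parse-failure (error) ())

(defun parse-item (s)
  (or (parse-integer s :junk-allowed t)
      ;; Signal an error, but offer recovery strategies
      ;; to whoever is above us on the stack.
      (restart-case (error 'parse-failure)
        (use-value (v) v)
        (skip-item () nil))))

;; The caller picks the policy; the parser never unwinds:
(handler-bind ((parse-failure
                 (lambda (c)
                   (declare (ignore c))
                   (invoke-restart 'use-value 0))))
  (mapcar #'parse-item '("1" "oops" "3")))
;; => (1 0 3)
```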


> By people who are not at all involved in any kind of Unix, Linux, open source, web, scripting, etc.

Common Lisp on UNIX appeared in the mid 80s, long before the ANSI CL standard.

Scott Fahlman was one of the five designers of the early Common Lisp. He headed the CMU CL project, which was a) on UNIX since around 1984 and b) public domain. Code from CMU CL was used in a dozen other implementations.

Other well known CL implementations for UNIX which were developed before 1994, when the ANSI CL standard was published: Allegro CL, GNU CLISP (free), (A)KCL (no cost, free, later renamed to GNU Common Lisp), LispWorks, Lucid CL, ...

Three large commercial implementations of CL were developed initially exclusively for UNIX and were available end 80s: Allegro CL, Lucid CL and LispWorks.

Generally the language came out of well funded research labs and companies and was designed to be portable across a large variety of operating systems (like UNIX variants, VMS, LISPMs, DOS/Windows, Mac OS, ...).


First issue: access for simple projects.

E.g. a simple web programming project: if you want to use your iPhone, you can use Working Copy on your GitHub Pages to edit, test and publish a web page with JS, CSS & HTML. With full version control, and you can even pull in external libraries if you have to.

Second issue: external integration, which Python has as a scripting language and C (and C++) have as OS-level "scripting" languages.

Third issue: its package system is hard to use.

I am not anti-Lisp; I just spent $700 to get a Casio AI-1000 and am trying to use uLisp.

It's just not mainstream.

God's programming language, as they say: not used by God, and barely by mortals.


> Makes absolutely no sense.

Well, for me, it's just not ergonomic. Unlike something like Python.

I solved this year's Advent of Code in Common Lisp in an attempt to learn it better. I determined in the process that the language was awful by 2021 standards and if you wanted a Lisp that was actually usable, go with Clojure or a decent Scheme.


I can see how Python is clearly more ergonomic than Common Lisp, but I really don't see any significant differences between CL, Clojure and Schemes. Just ergonomic micro-optimizations.

Clojure's native thread-safe data structures are a significant difference, though.


My biggest issues with Common Lisp were:

* Absolutely nothing is consistent. When you mutate something, is the place it goes the first or last argument? No consistency here.

* When you pass a value to a function, is it by value or by reference? Who knows? Rules are non-obvious, do not follow the principle of least surprise.

* Lisp-2 just makes working with higher order functions obnoxious.

One thing it has going for it though is the loop macro, that's admittedly pretty neat.


There is no "by reference" in Common Lisp; everything is a value. Some values have reference semantics. This makes no difference unless you're mutating, or making unwarranted assumptions about the eq function.

To understand most code, you can just pretend that all values have reference semantics. If mutation is going on and/or the eq function is being used, you have to prick up your ears and pay attention to that detail.


> Some values have reference semantics. This makes no difference unless you're mutating, or making unwarranted assumptions about the eq function.

That's pretty damn far from "no difference"! Once again, rules are non-obvious and do not follow the principle of least surprise.


If you're mutating any object, it is necessarily a value with reference semantics, period. Objects that do not have reference semantics are immutable.

Some objects that cannot be mutated (like numbers) can have reference or value semantics depending on how they are implemented. For instance, a bignum integer always has reference semantics. Small integers usually have value semantics: they fit into a machine word with no heap part. In that case, all instances of the number 0 or 1, and some range beyond that in both directions, will always be the same object according to eq.

If you're mutating an object (and, thus, something that has reference semantics), the difference that the reference semantics makes is that other parts of your program may hold a reference to that object; your code has not received a copy. If you haven't accounted for that, you probably have a bug.
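
A small sketch of the distinction (note the eq behavior on small integers is implementation-dependent, per the standard):

```lisp
;; Mutating an object with reference semantics is visible to
;; every holder of a reference to that object:
(defvar *shared* (list 1 2 3))
(defvar *alias* *shared*)
(setf (first *alias*) 99)
*shared*     ; => (99 2 3)

;; Numbers are immutable; whether two 5s are EQ is unspecified,
;; so portable code compares numbers with EQL or =, never EQ:
(eql 5 5)    ; => T, guaranteed
(eq 5 5)     ; usually T for fixnums, but unspecified
```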

Sure, this stuff isn't obvious; unless you already know another dynamic language like Ruby, Javascript, Python, ...

Complete neophytes have to be taught it from the fundamentals.


No, there really is no consistency to Common Lisp's mutation.

  $ sbcl
  * (defvar a (list))
  A
  * a
  NIL
  * (defun x (v) (push 'b v) v)
  X
  * (x a)
  (B)
  * a
  NIL
Now if you were to do something similar with an array, the original variable would be mutated. Just another example of how Common Lisp doesn't have any sort of internal consistency. Once again, rules are non-obvious and do not follow the principle of least surprise.


> Now if you were to do similar with something like an array, the original variable would be mutated.

That is false. To do a similar thing with an array, we need a non-mutating operation which returns a new array which is like an existing array, but with an element prepended.

Then we need a macro to mutate a place to replace an existing array in that place with a new such an array.

Then, exactly the same kind of behavior will be reproduced:

  (defun array-cons (obj array)
    (let ((new-array (make-array (list (1+ (length array))))))
      (replace new-array array :start1 1)
      (setf (aref new-array 0) obj)
      new-array))

  (defmacro apush (val array-var)
    (assert (symbolp array-var) (array-var)
            "fixme: simple implementation: ~a must be a symbol" array-var)
    `(setf ,array-var (array-cons ,val ,array-var)))


  [1]> (defvar a #())
  A
  [2]> (defun x (v) (apush 'b v) v)
  X
  [3]> (x a)
  #(B)
  [4]> a
  #()
What? Of course; we are not mutating any object here, but a variable: the local variable of x.

  [5]> (apush 1 a)
  #(1)
  [6]> (apush 2 a)
  #(2 1)
  [7]> (apush 3 a)
  #(3 2 1)
Lists work this way because they are made of cells, and those cells are immutable (if you want them to be) for very good reasons. This is part of the essence of Lisp since the dawn of the language.

It makes less sense to treat arrays that way. It's possible, but you need an exotic data structure to do it even halfway efficiently; that structure will never be as efficient as an ordinary mutable array for ordinary array work.

Whereas, treating singly linked lists this way is almost free of additional cost.


> That is false.

No, it's not. Notice how, in my example, the resultant list is updated in the function parameter, but not the initial var defined by defvar. Whereas if I made an array via (make-array), passed it into the function, and updated it the way the language documentation tells you to (setf and one of the aref functions), you'd end up with both the function parameter and the initial var pointing to the updated value. These are two logically different behaviors! And that's exactly what my criticism stated: "When you pass a value to a function, is it by value or by reference? Who knows? Rules are non-obvious, do not follow the principle of least surprise."

> Lists work this way because... It makes less sense to treat arrays that way.

Yes, that's exactly the point. The language is inconsistent and does not follow the principle of least surprise.


> When you pass a value to a function, is it by value or by reference? Who knows?

Value. Value, value, always value. However, it seems that maybe you don't understand exactly what the value is that you're passing.

> Notice how, in my example, the resultant list is updated in the function parameter

No [0]. It makes a new list (really a new cons cell, which contains the new item and then points to the old list [1]), and assigns that to v. And your example doesn't actually show this update happening, it just shows the return value of push (it so happens that it is updated, however). The original list --- passed in or otherwise --- is never changed. You say that you're "updating" a list, but you're not mutating or updating your data structure at all --- you're making something new, and assigning that to v.

> These are two logically different behaviors!

They are two logically different operations. Are you sure that you understand the data structures you're working with, and the operators that you're calling on them? Did you perhaps try to map concepts from another language into Common Lisp, find functionality that looked similar on the surface, then become surprised when the results were not identical?

Linked lists differ fundamentally in structure from arrays, and so the operators which are commonly used with them differ in turn. Perhaps you would like to compare arrays to vectors, as a closer approximation in data structures, with similar typical strategies of manipulation?

> does not follow the principle of least surprise.

You keep saying this, as if that is that. But nothing about your example is surprising to me, so I suppose this is a matter of perspective.

[0] http://clhs.lisp.se/Body/m_push.htm

[1] https://en.wikipedia.org/wiki/Linked_list


> Value. Value, value, always value. However, it seems that maybe you don't understand exactly what the value is that you're passing.

This is a distinction without a difference.

> ...you're making something new, and assigning that to v.

Yes, I know that. The point is that the behavior seen by the user for similar operations is entirely different between lists and other pieces of Common Lisp.

> They are two logically different operations.

Yes, obviously. The point is that Common Lisp does an absolutely crap job of making these things actually consistent from the point of view of the user. The language is littered with entirely inconsistent behavior and choices.


> Common Lisp does an absolutely crap job of making these things actually consistent from the point of view of the user

Common Lisp provides a decently designed sequences abstraction which allows encapsulated vectors and traditional unencapsulated lists, as well as strings, to be manipulated not just similarly, but by exactly the same code.

This was developed in recognition of exactly the issue that you are getting at. Forty years ago, the group of people designing Common Lisp were aware of this desire to have a consistent access method for different kinds of sequences, and they did something about it.

The charge of "absolutely crap job" can only be fairly leveled at a language that makes no effort to provide for uniform treatment of encapsulated vectors and unencapsulated lists.

What looks "surprising" depends on your background. I agree that Lisp contains surprises for someone who has programming experience, but only if that experience is limited to Python or JavaScript, which provide only encapsulated arrays as the principal sequence aggregation mechanism.
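
As a quick illustration of that abstraction, the very same sequence functions accept lists, vectors, and strings:

```lisp
;; REVERSE, REMOVE-IF, ELT, LENGTH, etc. take any sequence.
(reverse '(1 2 3))                   ; => (3 2 1)   list
(reverse #(1 2 3))                   ; => #(3 2 1)  vector
(reverse "abc")                      ; => "cba"     string

(remove-if #'digit-char-p "a1b2c3")  ; => "abc"
(elt '(a b c) 1)                     ; => B
(elt #(a b c) 1)                     ; => B
```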

I had two decades of programming experience coming to Lisp, and had written C programs which used both unencapsulated lists:

   list_node *list = NULL;
and I had written code with encapsulated ones:

   list_block list = LIST_EMPTY_INITIALIZER(list); // eg circular, expanding to { &list, &list }
leveraging the advantages of both.

So, I wasn't surprised in any way. From the description of NIL and the cons cell linkage in the book I was reading, I instantly recognized it as the unencapsulated style of lists, like "list_node *list = NULL".

In C, it would be obvious that this can't work:

   list_node *list = NULL;
   list_append(list, list_node_new(42));

   // wrongly expecting list to have changed from NULL to non-null!
but that, with an encapsulating list block object instead of a raw pointer to a node, this could work:

   list_block *list = list_new();
   list_append(list, list_node_new(42));
(Because C does not provide either of these, you can't blame the language for misunderstanding anything: only yourself, if you wrote that list yourself, or the library author.)

I remember that it was intriguing to me how Lisp gets away with unencapsulated single linkage for everything: how that is the list structure for the entire language. In light of functional programming (you can always just keep consing up new conses to transform lists) together with garbage collection, it very soon clicked for me. I remember thinking that if we could just keep mallocing new nodes and not worry about freeing, that would be pretty nice to work with in C.


Maybe the reason is marketing?


Are there any implementations of good type systems on top of lisp (let's say approaching Haskell's in capability)?


There is! Coalton: https://github.com/coalton-lang/coalton

> Coalton is an efficient, statically typed functional programming language that supercharges Common Lisp


Typed Racket is getting steady development and focus but more closely resembles Scheme.


I have libraries of functions written in pure sh.

I use those functions in both scripts and my interactive shell (which is bash).

I cannot do the same with zsh as my interactive shell if the scripting is supposed to stay POSIX/Bourne shell, which I want, as I often do this in systemwide scripts or modify existing scripts (all plain sh). The incompatibilities do matter.
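
As a sketch of the pattern (the file and function names here are made up): keep the functions to strict POSIX sh syntax, then source the same file from /bin/sh scripts and from an interactive bash.

```shell
#!/bin/sh
# lib.sh - helper functions restricted to POSIX sh syntax, so the
# same file can be sourced by plain-sh scripts and by bash.

# Print a timestamped message on stderr.
log() {
    printf '%s %s\n' "$(date +%Y-%m-%dT%H:%M:%S)" "$*" >&2
}

# Succeed if $1 names a command available on PATH.
have_cmd() {
    command -v "$1" >/dev/null 2>&1
}
```

A script then starts with `. /path/to/lib.sh`; the same line in ~/.bashrc makes the functions available interactively.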


> It's also dynamically typed which disqualifies it for large collaborative projects.

You can add type declarations and a good compiler will check against them at compile time: https://medium.com/@MartinCracauer/static-type-checking-in-t...

