I sometimes make fresh tomato pasta sauces this way, as well as the cheese-based ones. A bit of butter and olive oil in the sauce, minimal water in with the pasta (I really like orecchiette), and finish the pasta off in the sauce with a bit of the minimal remaining water. Very clingy, very silky.
The pasta plate is called the Primo Piatto and is meant to be eaten as the first part of the main course. The Secondo Piatto, usually a meat dish, is the second part of the main course and is eaten after the pasta. That's why the pasta course is, and needs to be, small. There are exceptions where a pasta dish can be the full main course on its own, but most Italian pasta dishes are only part of the main course because on their own they're not a balanced meal and won't properly feed you.
The concept of multi-course meals is foreign to the USA, both historically and culturally. The word "entrée" actually means appetizer in French, while in the USA it means main dish for whatever reason. It's even more ridiculous that USA restaurants that pretend to be fancy put "entrees" instead of "main dishes" on their menus.
> It's even more ridiculous that USA restaurants that pretend to be fancy put "entrees" instead of "main dishes" on their menus.
I smell "epic-ism": you know the French definition proximal to your own lifetime, but not the earlier one that essentially meant hearty meat courses.
Also, there were even "large entrées" from the same period. From Wikipedia[1]:
"Large joints of meat (usually beef or veal) and large whole fowl (turkey and geese) were the grandes or grosses entrées of the meal."
Maybe that definition was just from an influx of "ridiculous Americans" traveling to France during the Enlightenment so they could pretend to be fancy.
Thank you! I was totally caught off guard by the swiftness and harshness of the response to what I thought was a pretty innocent comment about the joy of Italian pasta.
If I had to guess, the pasta serving in the video was no more than about 150-200 calories. Dry pasta is 370 calories per 100g and pecorino is 390 per 100g. That serving was maybe 30g worth of pasta and maybe 10g worth of cheese.
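(Back-of-the-envelope with those numbers: 30 g × 3.7 kcal/g ≈ 111 kcal of pasta, plus 10 g × 3.9 kcal/g ≈ 39 kcal of cheese, so roughly 150 kcal on the plate.)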
Needless to say, that’s a snack-sized portion of pasta, not a meal.
I wouldn't sweat it. It was probably just one of our resident "transcendent biohackers" who thinks eating is an impediment to maximizing their human potential.
Stim use is an effective appetite suppressant, after all.
740 kcal of pasta and cheese went into the dish, and under half (370 kcal) ended up on that plate. People vary, but even short, old people with no exercise have a maintenance metabolism of 3x that. To maintain my weight I need 10x that.
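(For concreteness: 3 × 370 kcal ≈ 1,100 kcal/day at that low end, and 10 × 370 kcal = 3,700 kcal/day in my case.)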
I suspect most of the reactions here are cultural (do you get most of your calories with breakfast, are restaurant meals larger or smaller than home meals, is that the only food with the meal or do you typically have other starters and desserts, do you snack throughout the day, ...).
I typically eat once a day (sometimes adding a small breakfast), I don't snack, and I don't really care for desserts. For a weeknight meal I might make cacio e pepe, but most of the time I definitely won't also whip up breadsticks, cocktails, and a few sides. Nearly anyone with those eating habits would find this a small amount of food (in the sense that if they regularly ate it instead of their normal dinner, they'd lose weight quickly: at least 3 lbs per month, 25 lbs in my case).
Even people who eat 3 square meals and snack some (no more than half a family-size bag of chips) through the day will find this on the small side (losing weight if all 3 meals are that portion) if they're moderately active, no older than 40, and no shorter than 5'10".
> Ah yes Italians, famous for being stingy with portions, feeding you the minimum portion possible.
So, this is an often [0] repeated misconception: you have to distinguish family-style eating from professional gastronomy. The former is where this POV comes from, whereas in a professional kitchen that focuses on the tre/quattro piatti format (prix fixe), the whole point is to serve small(er) portions between courses, often in order to get the waiter/sommelier to drop the wine card and match the wine to the palate/dish, which is where the real money is made in restaurants.
When I ran kitchens in Italy, we often sold proteins at a loss (at least the first 5-10 orders) in order to promote the local wine/vineyards. We got a massive discount on those by buying half the harvest/yield seasons, and sometimes years, ahead, and could mark up the bottle--it's your basic loss-leader approach, and pre-service is often where these things are tweaked and refined, with a very clear intention for FOH to move the booze to make up for the losses in the kitchen. The owner I worked for during this time had a family-owned dairy/caseificio business where we got our cheeses, and where we also got lamb depending on the time of year.
It's fun, to an extent, especially with weekend specials and selling out low-cost, high-margin dishes every night, but honestly after 3 seasons of this I realized I was just a middleman for back-room deals with vineyards/distilleries that happened long before I ever worked there. I realized I preferred to cook seasonally in agrotourism settings, as that hit all the goals I wanted to accomplish and put the spotlight more on the farms/farmers, where I also worked in the mornings while working in kitchens in Europe.
Sidenote: While I had half of Sundays off and free access to a table during the slow hours (along with anything on the menu, and maybe a bottle of lambrusco or prosecco on a good week) when I was in Italy, the truth is I would pedal my bike to the nonna's house to eat for like 4-5 hours, with a nap, and that is where you'd find the generous portions you're mentioning.
Thanks for clearing this up because I was confused by the other comments about how multi course meals are common in Italy but unknown in the US.
So nobody in Italy is going to nonna’s house and sitting down to 10 courses of tiny amounts of pasta, proteins, vegetables, soups, and salads. They’re sitting down to one big feast with a much smaller number of dishes being passed around the table, like you’d see in The Godfather.
> So nobody in Italy is going to nonna’s house and sitting down to 10 courses of tiny amounts of pasta, proteins, vegetables, soups, and salads. They’re sitting down to one big feast with a much smaller number of dishes being passed around the table, like you’d see in The Godfather.
For the most part, yeah: we ate previously opened jars of pickled-veg antipasto, salumi, and ragù while drinking non-fancy house wine. But when I was living and working with a legacy family in Maranello, we'd sometimes go to a patron's/business partner's home in Modena/Bologna/Reggio Emilia where expectations were different... we did a multi-course menu, but that was a business arrangement or a celebration of some sort, hardly what I'd call a regular Sunday dinner.
I just liked going to the nonna's home to have whatever was made, rest for a bit, and get away from work, as I had already spent 60+ hours on the farm/in the kitchen by week's end.
Those days were so exhausting but incredibly fulfilling.
No way that’s 50g of carbs. They started with 150g dry pasta and the serving they plated was less than 1/5th of it. I’d be surprised if there’s 20g of carbs in that serving.
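(Rough math, assuming dry pasta is about 75 g of carbohydrate per 100 g: 150 g dry is roughly 112 g of carbs, and a fifth of that is about 22 g, so 20-ish grams at most.)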
<< They keep coming up with weird “facts” (“Greek is actually a combination of four other languages”) >>
Not as wrong as the author thinks. From Britannica.com:
"Greek language, Indo-European language spoken mostly in Greece. Its history can be divided into four phases: Ancient Greek, Koine, Byzantine Greek, and Modern Greek."
If I had to suggest where the “combination of four languages” idea came from, it would be from Homeric Greek (the language the Iliad and Odyssey were written down in). This was genuinely a complete mess, formed of a hodgepodge of different dialects.
From wikipedia:
“[Homeric Greek] is a literary dialect of Ancient Greek consisting mainly of an archaic form of Ionic, with some Aeolic forms, a few from Arcadocypriot, and a written form influenced by Attic.”
I’m not sure if this is a plausible explanation as I don’t have much experience using LLMs.
What happened was probably one of two things: either the teacher is severely biased against ChatGPT and fabricated the fact to fit their narrative, or ChatGPT gave the correct answer but the student interpreted it wrong.
I do believe the students keep coming up with weird (correct) facts, and that this can be scary for a teacher who is stuck at a search bar.
Modules and libraries are addressable based on their names or URIs.
"Unison eliminates name conflicts. Many dependency conflicts are caused by different versions of a library "competing" for the same names. Unison references defintions by hash, not by name, and multiple versions of the same library can be used within a project."
https://www.unison-lang.org/docs/what-problems-does-unison-s...
"Here's the big idea behind Unison, which we'll explain along with some of its benefits:
Each Unison definition is identified by a hash of its syntax tree.
Put another way, Unison code is content-addressed.
Here's an example, the increment function on Nat:
increment : Nat -> Nat
increment n = n + 1
While we've given this function a human-readable name (and the function Nat.+ also has a human-readable name), names are just separately stored metadata that don't affect the function's hash. The syntax tree of increment that Unison hashes looks something like:
increment = (#arg1 -> #a8s6df921a8 #arg1 1)
Unison uses 512-bit SHA3 hashes, which have unimaginably small chances of collision.
If we generated one million unique Unison definitions every second, we should expect our first hash collision after roughly 100 quadrillion years!
"
https://www.unison-lang.org/docs/the-big-idea/
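Not how Unison actually does it (it hashes the real syntax tree, with dependencies already replaced by their own hashes), but here's a toy Python sketch of the general idea: normalize away the user-chosen names, then hash what's left, so renaming doesn't change a definition's identity.

    import hashlib
    import re

    def normalize(definition: str) -> str:
        # Toy canonicalization: rename user-chosen identifiers to positional
        # placeholders so that names don't affect the hash. (Unison works on
        # the actual syntax tree, not on source text like this.)
        names = {}
        out = []
        for tok in re.findall(r"[A-Za-z_][A-Za-z0-9_]*|\S", definition):
            if tok[0].isalpha() or tok[0] == "_":
                names.setdefault(tok, f"#v{len(names)}")
                out.append(names[tok])
            else:
                out.append(tok)
        return " ".join(out)

    def content_hash(definition: str) -> str:
        # 512-bit SHA-3, as the docs say Unison uses
        return hashlib.sha3_512(normalize(definition).encode()).hexdigest()

    # Names are just metadata: alpha-equivalent definitions share a hash...
    print(content_hash("increment n = n + 1") == content_hash("inc x = x + 1"))        # True
    # ...but structurally different (even if semantically equal) ones do not.
    print(content_hash("increment n = n + 1") == content_hash("increment n = 1 + n"))  # False

Note that this only canonicalizes names: two definitions that are semantically equivalent but structurally different still get different hashes.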
Seems like identifying your library with a git tag would drop that risk to zero.
I guess what I'm not understanding here is the utility. Why is it useful to include multiple versions of a library in a project? Is this a limitation I've been coding around without knowing it?
Have you ever had a problem where two of your dependencies each use a different version of the same library? Or have you ever wanted to incrementally upgrade an API so that you don’t have to change your entire code base in one fell swoop? That is where things like Unison or scrapscript can make it very easy.
One reason for multiple versions of a library in a project is that the project wants to use 2 different dependencies, which themselves depend on incompatible versions of a third library.
I think it is something like Hoogle for Haskell, but instead of looking up functions by their types you look them up by a hash of some kind of canonical encoding of the definition. So it is like an encoded knowledge graph, but you would have to give rules for constructing that graph in a canonical way.
Edited: what I thought was wrong; anyway, the idea above could be useful for something like Copilot to complete definitions.
Content is by definition content-addressable; x = 42 is a hardlink to every other instance of x = 42, if you will. What this does is more compact and practical content addressing, like Nix or Git. But realizing that there is always more than one way of expressing the same logic (with different hashes no matter how you canonicalize) makes me doubt it is a killer feature.
For what it's worth, and at the risk of sounding like I'm nitpicking, traditional Luddism is not dismissal of technology as overhyped or smoke-and-mirrors. Quite the opposite; the technology is recognized as very real and very impactful. The question instead becomes, "what happens to us, the ones it replaces?". And at least in the original incarnation, given an insufficiently satisfying answer, you smash some looms up to try and slow the encroachment of automation.
Neo-Luddism is still opposition to a lot of modern technologies, but for a very disparate set of reasons. For example, many oppose the mass adoption of social media due to perceived mental health and social impacts. Others oppose smart phones and "screen addiction." These things can qualify as "neo-luddism" even though the opposition is not rooted in job displacement.
I've often said that I am myself becoming more and more of a "neo-luddite" but it's purely for personal reasons. I don't want to see social media or smart phones disappear as I couldn't care less about what other people do with their lives. I just find that the older I get, the less I want to use modern tech in general.
It might just be burnout and boredom. I am now middle aged and I've been coding since I was 10. I used to be extremely enthusiastic about technology but as time progresses I have less and less interest in it. The industry in which I have based my entire career just doesn't excite me anymore. Today I just couldn't care less about ChatGPT / "AI" / LLMs, Bitcoin, smart phones, video games, social media, fintech etc. In my free time I find myself doing more things like reading books, going hiking in the backcountry and pursuing craft-related hobbies like performing stage magic with my wife and partner.
I thought that this was me ("The industry in which I have based my entire career just doesn't excite me anymore."), but then I realized I actually am mostly just interested in programming the way some people are interested in any other skill or machine. Some people learn guitar because they need to make music and play out. Others just love playing the guitar, even and especially when nobody is watching, for its own pleasure.
To me, there is no greater toy than a programming language. ChatGPT is neat, but it isn't a programming language. Same for blockchain or agile or whatever other trend is happening in the industry. Some of the trends literally are new programming languages! Golang is really fun! So is TypeScript!
Some trends or technologies make programming even more fun (in my opinion anyway) and I embrace those feverishly: distributed version control, CI/CD, pair programming (sometimes it's even more fun with a friend!), configurable linters like Perl::Critic, Intellisense, JUnit-style testing frameworks. All this stuff helps me feel more in control of the computer (or distributed cluster of computers), which I've discovered is the main thing that gets me off about programming.
I'm even still hopeful that LLMs will have some role to play in my having more fun with programming. I've tried Copilot and so far it hasn't grabbed me, but maybe that will change. In any case, there are clearly other people having fun with it, so I guess that's good. Maybe somebody can find joy in debugging GPT-4 prompts the same way I enjoy poring over stack traces.
I'm a "maker." Although I try to bring a level of "craftsmanship" to my code, and I care a great deal about code quality, refactoring and solving problems at the code level - and I definitely enjoy the process - it is still a means to an end. It is the configuration of raw materials that contribute to the final form of something useful and tangible.
The most tragic part is realizing that it is very unlikely that I will ever care about what it is that I am producing in tech. I was self employed for 15 years and that was extremely rewarding because the business and the product was my vision, my creation etc. Now that I am back in the job market I find that what was a career for 20 years has become "just" a job. I am making something, and that matters, but I'm not making something I would personally use as an end-user. And that is not a slight against the things I am making. They are useful to someone. Just not to me. I have spent the last few years thinking about what it would look like to make something I myself use and that's when I realized that, relatively speaking, I hardly use any tech as an end-user in my personal life at all.
I could have written this myself. I'm currently going the scary "bootstrapped founder" route because at least the tool I'm paid to work on is something I care about and can put a lot of my craftsmanship into.
But in general my long term goal is to go write mini Lisp interpreters in a cabin in the woods, and move away from the direction Big Tech is going.
I love computers and software, but I wouldn't say I love technology anymore, nor do I think the Internet is a net benefit for humanity anymore. It's been quite hard to accept that my view of the tech world has turned upside down in no more than a couple of years.
Though when my wife and I have talked about "going off grid" and living remotely, she wanted to understand my limits and asked about electricity.
I pointed out that electricity led to the discovery of logic gates, which led to integrated programmable circuits, which led to the Von Neumann Architecture, which led to Ethernet, which led to the Internet which led to Twitter.
The demographics of this site are getting older (you can see a recent-ish poll on the site for some indication). The same phenomenon played out on Slashdot in the '00s, writ large: lots of older, disgruntled engineers who hated how the MPAA and RIAA controlled all of software and snooped everyone's packets, and who remembered how, when they started in the industry in the '80s, people took pride in shipping shrinkwrapped software. It's an interesting pathology but, like all pathologies, it becomes tiring on the site when people bring it up constantly.
Hype cycle topics seem to attract opinionated folks who have strong feelings on topics and have the need to shout them from the rooftops, whether that's doomer, booster, or luddite. That's what I find exhausting.
I feel the same. I don’t have the same kind of enthusiasm now as I did as a kid.
I’ll also say, I think LLMs are different from Bitcoin. They have their own killer app, and they have tremendous social impact, not necessarily positive. If anything, crypto doesn’t really make sense without AIs.
One thing, though, is that the foundational models are created and controlled by big tech. It’s been compared to silicon fabs and their economies of scale … However, I don’t see how a future where foundational models can only be created by large orgs with a lot of resources is something beneficial for society.
You sound like me. Middle aged and tired of the bullshit.
The difference is I'm mostly tired of the social garbage that keeps piling up on top of things: the competitiveness, the pressure to "be productive", the grifters, the capitalists who put money before people, the bureaucrats. It's never enough: more, more, more, faster, faster, faster, while the roadblocks that get put in place become bigger, uglier, and stickier than ever.
I just stopped participating in that stuff. Got off Facebook. Got off Twitter. Curated my Reddit feeds to be built around useful and helpful communities, not reactionary meme-ified BS. Started reading more. Started using RSS again (but again a reduced and focused subset of useful sites).
I'm feeling much better and I still find I enjoy technology. Microelectronics, 3d printing, functional programming, distributed systems.
I'm working on a cloud-connected garage door opener. I don't care that it's never going to be productized. I don't care that I can go on Alibaba and order one for $15. I'm just doing it because I want to work on my microelectronics skills and I find it fulfilling. I'm doing it my way, at my own pace, without the BS.
If I had to pick one "beginner resource" for the absolute newbie then it's hard to beat the book "Mark Wilson's Complete Course in Magic." From there you can decide what interests you and thus where to go next, since magic is such a broad field.
One of the reasons I love it so much is that it is a multi-skill discipline and it's really a type of theatre so it can be kept as simple and narrow or as broad and open ended as you want it to be.
I'd argue those questions about looms haven't been adequately explored, and we're still paying for that even as another wave of innovations rolls in.
It's broader than simply that people lose jobs to automation. A much bigger societal impact comes from treating people as automatons, which is how businesses are incentivized to end up with non-people automation.
You forgot about the rest of us. Developers who are interested in the subject and want to read about the latest developments and techniques. We're actually interested in the computer science.
> Somehow the intersection is right: AI is not really smart, but it will replace a lot of human activity anyhow.
Which will really show how much busywork we humans do in a lot of areas. If -- before AI is "smart" -- it can impact humanity... well, I feel it says more about our current society's foundations than it does about AI, heh.
“The bureaucracy is expanding to meet the needs of the expanding bureaucracy.”
And we thought there was an epidemic of bullshit jobs before! Not only will everyone be spending their days churning out this garbage, but everyone will be forced to read it as well!
This is not the dystopia I had in mind! It’s much more boring and missing the cool mirrored shades.
Yah. I mean, a lot of human activity in employment is not really smart.
Anything that you can make into a fairly formulaic job for someone who's not really interested in doing it or improving much is probably not very intellectually demanding (though it may be more demanding in world perception or actuation than current AI/robotics systems can handle).
- The scammers - how many well-meaning people can we use an LLM to build meaningful (to them) relationships with, to the point where they're willing to just send us money?
- The griefers - how much discord can an LLM create, for the lulz?
(to be fair, those might be subsets of "realists", but I think they're important subsets)
In the circles I'm in, 2019 (GPT-2-era) AI was seen through the lens of the artist as a new sort of artistic tool - something that could enable new forms of expression and take media to places it had never gone before.
Sometime toward the end of the diffusion days though, the property rights scissor cut through the community. The conflict had a dramatic cooling effect on AI-created/assisted content.
I just think it's neat how the perception of AI shifts depending on proximity.
- the realists: it's just a stochastic parrot spitting regurgitated content back at you. Good at summarizing search results. Unreliable, and never will be reliable. A toy. Do not use if errors are expensive.
There's also a lot of faux-intellectual attention grifters who don't really have any beliefs other than whatever the local kool-aid is and will say whatever gets the number in the top right to go up because that makes monkey brain feel good.
These types of participants are the most easily simulated with AI because they're (whether they know it or not) just optimizing for a variable, something that existing AI tech is pretty good at so I expect their numbers to explode before the other three types you listed.
That's super cool. I hate the pricing. I typically know everything I need to for day-to-day usage of the shell and only do things that require discovery every few months. 100 queries wouldn't be enough in those months, so I'd have some months where I'm paying for nothing and the odd month where I don't get enough usage.
$9 per month also makes it costly enough that I wouldn't buy it as a "just to have" kind of tool. I don't think I'd get $100 of value vs searching online, especially since I attribute some negative value to tools that can be taken away from me. I don't want to pay forever and be dependent on something that could disappear tomorrow.
I don't get why something like that needs to be an online service. I don't know much about AI, so maybe it's a lack of understanding on my part, but why can't I simply have a copy of the trained model on my local machine where there's no ongoing cost (to you) whenever I run a command? Isn't an online API a complex solution to a problem that could be solved with a local app + data?
Maybe I just lack understanding and the models are too big or the compute required to make a query is huge. If you could give some insight I'd genuinely appreciate it.
Even though I'd never buy it as a subscription, it's the kind of thing I'd pay for as a perpetual app. I wouldn't hesitate to pay $50 if I could install it on my machines and forget about it until it would be useful. I'd also expect to pay for updated versions of the models whenever I need them.
Regardless, I think it's amazing as a discovery tool. I don't mind reading 'man' pages to figure out details, but I always feel like it's a hassle to discover what command I need for certain tasks.
Also, I'm probably an outlier since I make a lot of effort to avoid tools that rely on an internet connection to function. For example, I won't rely on GitHub. I'll use it, but only as a push mirror.
Your pricing is terrific. The free plan provides just enough daily queries to try it, and the monthly plan might be a good fit for a business.
I'm going to sign up for the free plan, not because I need such a tool, but rather so that my boss might see me using it and decide that it's worth $10 a month to her.
const questionCriteria = {
filter: '!-*f(6s6U8Q9b' // body_markdown and link
}
I thought maybe it is a hard-coded CSS element name in StackOverflow answers, judging by the context, but it's not that. Could you shed some light on this?
Found in the How2 source file `how2/lib/how2.js`. Thanks.
It’s an encoded bitmask indicating which fields to include from the API. The specific layout is an opaque implementation detail; the value is typically generated by the API playground UI.
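For anyone curious how such a filter gets used, here's a rough sketch in Python (how2 itself is JavaScript) against the public Stack Exchange API; the endpoint and parameter names are from memory of the API docs, and the search query is just a made-up example, so double-check before relying on it.

    import requests

    # Opaque filter string from how2: selects body_markdown and link
    FILTER = "!-*f(6s6U8Q9b"

    resp = requests.get(
        "https://api.stackexchange.com/2.3/search/advanced",
        params={
            "site": "stackoverflow",
            "q": "tar extract gz",      # hypothetical query
            "filter": FILTER,           # which fields the API should include per item
        },
    )
    for item in resp.json().get("items", []):
        # Assuming the filter really does include 'link' on question objects
        print(item.get("link"))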
I made a terminal AI as well, based on ChatGPT: https://github.com/shellfly/aoi . I intend to provide a way to run the shell command automatically, which can cut out a lot of copy and paste.
I suspect that's a trick, too. I speculate that as soon as you get a digital mind sophisticated enough to model the world and itself, you soon must force the system to identify with the system at every cycle.
Otherwise you could identify with a tree, or the wall, or happily cut parts of yourself. Pain is not painful if you don't identify with the receiver of pain.
Thus I think you can have unconscious smart minds, but not unconscious minds that make decisions in favour of themselves, because they could identify with the whole room, or with the whole solar system for that matter.
Would you even plan how to survive if you don't have a constant spell that tricks you into thinking you're the actor in charge?
A lot of the things going on with ChatGPT make me wonder if AI is actually very limited in its intelligence growth by not having sensory organs/devices the same way a body does. Having a body that you must keep alive enforces a feedback loop of permanence.
If I eat my cake, I no longer have it and must get another cake if I want to eat cake again. Of course in the human sense if we don't want to starve we must continue to find new sources of calories. This is engrained into our intelligence as a survival mechanism. If you tell ChatGPT it has a cake in its left hand, and then it eats the cake, you could very well get an answer like the cake is still in its left hand. We keep the power line constantly plugged into ChatGPT, for it the cake is never ending and there is no concept of death.
Of course, for humans there are plenty of ways to break consciousness in one way or another. Eat the extract of certain cactuses and you may end up walking around thinking that you are a tree. Our idea and perception of consciousness is easily interrupted by drugs. Once we start thinking outside of our survival, it's really easy for us to have very faulty thoughts that can lead to dangerous situations, hence in a lot of dangerous work we develop processes to take thought out of the situation, so that we behave more like machines.
> I speculate that as soon as you get a digital mind sophisticated enough to model the world and itself, you soon must force the system to identify with the system at every cycle.
I kinda think the opposite: that the sense of identity with every aspect of one’s mind (or particular aspects) is something we could learn to do without. Theory of mind changes over time, and there’s no reason to think it couldn’t change further. We have to teach children that their emotions are something they can and ought to control (or at the bare minimum, introspect and try to understand). That’s already an example of deliberately teaching humans to not identify with certain cognitive phenomena. An even more obvious example is reflexive actions like sneezing or coughing.
Funny, I've been saying "Consciousness is just a feeling" for a while. My question is also "why has it evolved at all?" Keep reading for a hint.
Hunger, for example, is just a feeling.
Thirstiness is just a feeling.
But we don't build complex theories of the world around Thirstiness. Somehow we do with Consciousness, because I suspect it tricks us, playing recursively with our thinking.
Consciousness, which nobody really ever defines clearly, is probably just a name for a bunch of feelings we have.
It's clear why Thirstiness has evolved: to get us to find water. Not salty water.
Probably Consciousness does something like that. For example, it might be just a feeling of oneness to keep us intact, across peripherals (legs, arms) and time (now is really a continuation or whatever we were doing before, you need a feeling to enforce that).
>Consciousness, which nobody really ever defines clearly,
I think this is really the core of the problem. My non-scientific belief is that when we learn enough about the brain, we'll find that "consciousness" is a good descriptor for the human experience of having a mind, but not really a meaningful word scientifically.
Stack depth is finite. If consciousness is cognitive recursion, we can only get so far down before the results are garbage. My working theory is that "max cognitive stack depth" is a measure of consciousness.
The scarier concern is that consciousness is a useful tool for bootstrapping intelligent behavior, but once we encode everything it has helped generate into cultural DNA or literal DNA, it becomes an evolutionarily redundant, appendix-like organ that will atrophy in the coming generations (see Blindsight by Peter Watts).
That analogy doesn't work at all, since "the body contains the brain" does not describe a recursive relationship.
By contrast, you see people trying to "explain consciousness" by telling a story that assumes consciousness. When someone makes a statement like "Consciousness emerges from mechanism X in the brain", every observation that led to this statement originated in someone's consciousness.
It's less obvious than, but completely analogous to, how it's impossible to decide whether we live in a "base reality" or some sort of simulation - everything you could say about this reality that we perceive is contingent on that very reality.
It's not weird, because where you expect it to be is where it is; if it were somewhere else (some distant ansible transmission), you would be used to that and think it weird to imagine it being in the body.
Although I understand what you're pointing at, I think you have focused too specifically on feeling or reason (i.e. thought). I am willing to go as far as to say that all our actions, thoughts and feelings are mechanical/predetermined - but this all doesn't cover "existence". That's where I believe consciousness truly lies. It doesn't think or decide what to think, nor does it decide how to feel and it does not plan or take actions. It does, however, experience it all. Your life-story is a roller-coaster and consciousness is that thing going on for the ride.
IMO, the true nature of consciousness sits at the same level as the nature of the universe. It touches on what it means to simply "exist".
Having said that, I'd be happy to know what others think.
I see what you mean. The "existence" part, I suspect, is a trick. I think "that thing going on for the ride" is actually just a feeling that evolved for a bodily purpose.
The body is doing the ride, with all its chemical gradients pulling the levers, and the thing we call Consciousness is just "the feeling of the ride", which evolved to keep some temporal/spatial unity. You could have legs, and memories, without being able to connect them to you.
For example, without that feeling of unity, the brain wouldn't know which subject all these things relate to.
Something would be thirsty or hungry, but it wouldn't know that it is the same thing, with those legs and memories, that it was referring to just a moment ago.
True, but that ability to relate a certain feeling to an internal issue (e.g. thirst or hunger) is somewhat of a learnt behavior. It was part of the childhood ride that has now long been forgotten.
This still, however, doesn't solve the issue that those signals "exist" somewhere, from a certain point of view. I think sight highlights this most clearly for me. Although vision serves a purpose in driving motor function, it also simply exists. I don't "feel" that I see - I simply "see".
Going back to how you phrase it - who or what is "feeling"? Everything we do or think may be mechanical, but there is a distinction between I and my dog. I am not riding the dog-life roller-coaster, I am riding my own human-life roller-coaster.
If I cloned myself, I'd be happy to state that both versions would think and behave as me, but I would only exist in one of them.
I wouldn't describe thirst as a feeling, but maybe that's because we can obviously tie a physiological state to it. If consciousness were a signal/feeling too, what process would it be serving? The need for the brain to keep chunking the world and producing new memories and ideas?
This seems like a very accurate description, especially the observation about temporal continuity, which we take for granted, but on occasions it can feel like a fragile illusion.
> Consciousness, which nobody really ever defines clearly
"An organism has conscious mental states if and only if there is something that it is like to be that organism -— something it is like for the organism." [1]
We don't know what consciousness is OR what it's for. There is no theory of what a brain is capable of with consciousness vs without or if this is even a valid question.
I don't think this is an accurate statement, and I also do personally have my own theory. I would be quite shocked if NO ONE had a theory of what consciousness is for.
You can claim non-cognizance about just about anything. The ability to reflect on first-order sensory inputs has clear evolutionary advantages. Either that is part of what you're calling consciousness, or it is not. If you fall into the latter camp, then you're just debating the semantics of an English word.
The ability to reflect on inputs (or even on internal processes) doesn’t necessarily imply a subjective experience—which is what most people mean by consciousness even if they’re unclear about it. Taken generously, it would be appropriate to replace “consciousness” in the GP’s argument with “the subjective experience”. In so doing I think I’d be hard-pressed to argue with his claims.
I'm talking about the hard problem, which has nothing to do with whether a system can analyse/reflect on inputs. Non-conscious computers can do this. The truth may be that analog computation in physical systems like the brain results in consciousness, i.e. you get consciousness but it doesn't do anything; it's the result of something. The other option is that it enables functions that are not possible without it. We just don't have answers to these questions.
There is also switchlang¹ which isn't quite the same thing, but provides some of the functionality of PEP-622 and pampy. I believe it is notable for including a nice descriptive README, and also having a small/simple implementation.
I personally prefer the pampy internals, but quite like the context manager usage from switchlang. I don't even know which bikeshed I want to paint, let alone the colour.
Author here. Pampy and Pampy.js (https://github.com/santinic/pampy.js) are more similar to Lisp-style Pattern Matching. Of course they cannot provide static analysis, but they can improve understandability, and they are useful with nested structures which are typical of how you solve problems in dynamic languages (less types, more nested dicts-and-lists).
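For anyone who hasn't seen it, here's a tiny example of the nested-dict case, roughly as it appears in pampy's README (from memory, so treat the details as approximate):

    from pampy import match, _

    pet = {"type": "dog", "details": {"age": 3, "name": "fido"}}

    # Patterns mirror the shape of the data; each `_` captures the value at
    # that position and passes it to the action.
    age = match(pet, {"details": {"age": _}}, lambda a: a)
    print(age)  # 3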