Hacker News | codewithcheese's comments

> So in this scenario you could input something like: { NodeType: Person, EdgeTypes: [IS_PARENT_OF, IS_CHILD_OF] }

RDF, OWL are existing formats for defining a schema
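To make the mapping concrete, here is a minimal sketch (namespace URI and function name are illustrative, not from any standard library) that renders a `{ NodeType, EdgeTypes }` spec as an RDFS/OWL schema in Turtle:

```python
def spec_to_turtle(node_type, edge_types, ns="http://example.org/schema#"):
    """Render a simple {NodeType, EdgeTypes} spec as RDFS/OWL in Turtle.

    The node type becomes an owl:Class; each edge type becomes an
    owl:ObjectProperty whose domain and range are that class.
    """
    lines = [
        f"@prefix ex: <{ns}> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "",
        f"ex:{node_type} a owl:Class .",
    ]
    for edge in edge_types:
        lines += [
            "",
            f"ex:{edge} a owl:ObjectProperty ;",
            f"    rdfs:domain ex:{node_type} ;",
            f"    rdfs:range ex:{node_type} .",
        ]
    return "\n".join(lines)

print(spec_to_turtle("Person", ["IS_PARENT_OF", "IS_CHILD_OF"]))
```

The output can be loaded by any Turtle-aware RDF store; a real schema would also declare inverse properties (e.g. `owl:inverseOf`) and labels.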


Domain-driven design is well aware that it is not feasible to have a single schema for everything, so it uses bounded contexts. Is there something similar for the Semantic Web?


In the Semantic Web, things like ontologies and namespaces play a role similar to bounded contexts in DDD. There's no exact equivalent, but these tools help different schemas coexist and work together.


Isn't that the point of RDF / OWL etc.?


The point of RDF is to raise the problem to the greatest common divisor level of complexity - a hypermedia graph of arbitrary predicates about arbitrary objects.

OWL is a modeling language to describe ontologies, e.g. some constraints people have agreed to follow about how to structure the information they publish in graphs. It can also be considered an advanced schema language.

The idea of a bounded context in DDD is that it is not a good use of time (or indeed may not be feasible at all) to get a single ontology for an entire domain, so different subdomains may be unified by some concepts but have differing or overlapping concepts that they use internally. Two contexts know they are talking about a product called "New Shimmer", even if one understands it as a floor wax and the other uses it as a dessert topping.

The two pillars of the semantic web are public data and machine understanding, which IMHO pushes strongly toward the (often unachievable) goal of a single kitchen-sink schema.
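The "New Shimmer" situation can be sketched in a few lines (the data and field names are invented for illustration): two bounded contexts describe the same product with incompatible schemas, and the only thing they must agree on is identity.

```python
# Two bounded contexts describing the same product, "New Shimmer".
# Each context has its own schema; only the shared identifier links them.
cleaning_ctx = {
    "id": "urn:product:new-shimmer",  # shared identity across contexts
    "category": "floor_wax",
    "coverage_sq_ft": 500,
}
dessert_ctx = {
    "id": "urn:product:new-shimmer",
    "category": "dessert_topping",
    "calories_per_serving": 120,
}

def same_product(a, b):
    """Contexts agree only on identity, not on schema or semantics."""
    return a["id"] == b["id"]

assert same_product(cleaning_ctx, dessert_ctx)
```

A kitchen-sink schema would force `category` (and every other field) into one vocabulary; bounded contexts let each subdomain keep its own.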


Yes, looks like they could be adapted to create Story Maps


Looking forward to their durable execution Workflows. Writing Temporal workflows has great DX but their pricing and hosting requirements put it out of reach for many projects.


That's OpenRouter; they are listed.


I think the Chromium compile is widely used.


Can you point me to a comparison site? I didn't find an M3/M2/7950/... comparison site for Chromium compile times :-(

(Even Phoronix coverage is scarce and mostly focuses on laptops; I have no laptop.)


There probably isn't a site that just compares Chromium compilation times, but you can find the numbers in many YouTube and text reviews.


> - Monorepo with FE, BE and shared types
> - React + Next.js (frontend)
> - pnpm
> - Nest.js (server)
> - Tailwind
> - Material UI
> - Apollo (GraphQL)
> - Jest (Testing)
> - Typescript + ESLint + Prettier + Husky
> - Turbo
> - TypeORM
> - Segment
> - Database migrations
> - Docker
> - Logging (Pino.js)

How will non-tech founders develop their product on this stack? That's a stack for developers to build the product.

This is a very hard problem. When you see a fragmented ecosystem, it's because players in the ecosystem have wide-ranging and demanding requirements.

You will have to acquire your target customers when they are ready to set up this stack but before they have done so themselves. That would require very high brand awareness, like "Oh, I was going to set up on AWS, but XYZ makes it so easy." Only a few companies have achieved this type of awareness in the devops space. Heroku comes to mind. None of them are indie-hacker projects.


Thanks, Codewithcheese. To clarify, it's not for non-technical founders. It's the opposite—it's for first-hire engineers or technical founders exclusively.

You raise a good point with Heroku, but that's a "managed service," like Aptible or Vercel, which offers simpler alternatives to AWS and is used throughout the lifecycle of your app. I'm specifically providing a service for initial setup and deployment only. 'Pay-once, use-once'.

For the MVP, this is targeted at a market that wants to host on AWS, along with an array of other extremely common web-based SaaS services, such as authentication, feature flagging, and monitoring.


How are you using Generative UI?


Sorry, not much to show at the moment. It is also pretty new so it is early days.

You can find some open-source examples here https://github.com/chatbotkit. More coming next week.


It can be faster and more effective to fall back to a smaller model (GPT-3.5 or Haiku); the weaknesses of the prompt will be more obvious on a smaller model, and your iteration time will be faster.
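A minimal sketch of that workflow: route prompt-iteration runs to a cheaper, weaker model and only switch to the stronger one when shipping. The model names and stage labels here are illustrative placeholders, not a recommendation.

```python
def pick_model(stage: str) -> str:
    """Pick a model by development stage.

    Weaker models surface prompt flaws more clearly and iterate faster;
    the model names below are illustrative placeholders.
    """
    models = {
        "iterate": "gpt-3.5-turbo",  # cheap and fast: exposes weak prompts
        "ship": "gpt-4",             # stronger model once the prompt holds up
    }
    return models[stage]

print(pick_model("iterate"))
```

The same switch could be a config flag or an environment variable; the point is that the iteration loop never pays full price or full latency.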


great insight!


Yeah, also, prompts should not be developed in the abstract. The goal of a prompt is to activate the model's internal representations so it can best achieve the task. Without automated methods, this requires iteratively testing the model's reaction to different inputs, trying to understand how it's interpreting the request and where it's falling down, and then patching up those holes.

You need to verify whether it even knows what you mean; assume nothing.
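That iterate-and-patch loop can be sketched as a tiny harness (the function names, case format, and stub model are invented for illustration): run the prompt over a set of test cases, collect the failures, and inspect them to see where the model misreads the request.

```python
def evaluate_prompt(prompt, cases, call_model):
    """Run a prompt template over test cases and collect failures.

    `call_model` is any callable taking the rendered prompt and
    returning the model's text output (a stub here; an API call in use).
    """
    failures = []
    for case in cases:
        output = call_model(prompt.format(**case["inputs"]))
        if case["expected"] not in output:
            failures.append({"case": case, "got": output})
    return failures

# Stub model for demonstration; swap in a real API wrapper.
def stub_model(text):
    return "yes" if "polite" in text else "no"

cases = [
    {"inputs": {"tone": "polite"}, "expected": "yes"},
    {"inputs": {"tone": "rude"}, "expected": "yes"},  # fails: reveals a hole
]
failures = evaluate_prompt("Reply in a {tone} tone.", cases, stub_model)
print(len(failures))  # the second case fails
```

Each failure points at a hole to patch in the prompt; re-run the harness after every edit so regressions show up immediately.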


In the end, it comes down to a task similar to people management, where giving clear and simple instructions works best.


Which automated method do you use?


The only public prompt optimizer that I'm aware of right now is DSPy, but it doesn't optimize your main prompt request, just some of the problem-solving strategies the LLM is instructed to use and your few-shot learning examples. I wouldn't be surprised if there's a public general-purpose prompt-optimizing agent by this time next year, though.
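To illustrate the few-shot-example part (this is not DSPy's API, just a toy brute-force version of the idea), an optimizer searches over subsets of candidate demonstrations for the one that scores best on some evaluation function:

```python
import itertools

def best_few_shot(examples, k, score):
    """Brute-force search for the k-example subset maximizing `score`.

    Illustrates what few-shot optimizers automate; real systems score
    each candidate subset by running it against an evaluation set.
    `score` takes a tuple of examples and returns a number.
    """
    return max(itertools.combinations(examples, k), key=score)

# Toy scorer: prefer shorter demonstrations (a real scorer runs evals).
examples = ["a long demonstration example", "short", "mid length one", "tiny"]
best = best_few_shot(examples, 2, lambda combo: -sum(len(e) for e in combo))
print(best)  # the two shortest examples
```

Brute force is exponential in the number of candidates, which is why real optimizers use bootstrapping or search heuristics instead.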

