subwayclub's comments

The most popular "how to synthesize" article series is Synth Secrets, which ran in Sound on Sound magazine over a period of years. You can find it online.

Regarding what tool to use, a modular like this (albeit in physical, analog form) is the "original" way to do it. A modular can do just about anything provided you have the modules and the will to program it. Later in the commercialization of synthesis, manufacturers built smaller semi-modular or fixed-path synthesizers which don't let you patch anything anywhere, but are considerably simpler to get a result from, since their basic architectures expose all the essentials of sound character without being so much of a programming exercise. The Minimoog and Oberheim SEM, to name two, just sound good out of the box and generally sound good no matter how you turn the knobs, and their designs are widely cloned even today.
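As a rough sketch of the fixed path those instruments hard-wire (oscillator into filter into amplifier envelope), here's a toy rendering of one note in Python; the library choice and every parameter value are just illustrative assumptions, not any particular synth's design:

    # Toy subtractive-synth voice: oscillator -> lowpass filter -> amp envelope.
    # Assumes numpy and scipy are available; all values are arbitrary.
    import numpy as np
    from scipy.signal import butter, lfilter

    SR = 44100
    t = np.arange(SR) / SR                         # one second of samples

    saw = 2.0 * ((110 * t) % 1.0) - 1.0            # oscillator: 110 Hz sawtooth

    b, a = butter(2, 800 / (SR / 2), btype="low")  # lowpass filter standing in for the VCF
    filtered = lfilter(b, a, saw)

    attack = np.linspace(0.0, 1.0, int(0.01 * SR))                   # amp envelope: quick attack,
    decay = np.exp(-3.0 * np.linspace(0.0, 1.0, SR - attack.size))   # then exponential decay
    voice = filtered * np.concatenate([attack, decay])               # the note you'd hear

A fixed-path synth makes all of those routing decisions for you and just exposes the knobs; a modular lets you repatch every arrow in that chain.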

This era lasted roughly from the early '70s through the early '80s, after which synth programming soon became complicated again, this time by digital technology that let everything be tweaked, patched, and perfectly recalled from memory, provided you were willing to dive through menus. Synth programming turned into a specialty in this era; everyone who just wanted a sound used the built-in presets.


Sometimes a "tool" is as simple as agreeing to enter a certain kind of conversation.

For example, when martial artists practice, they mix the study of technique with its application in a competitive setting. This distributes the learning into a feedback loop amongst the population of students, with good techniques surviving and evolving in their application over time. Gradually the dojo settles into a style based on the direction of its teaching and the kinds of students it recruits, which can then be further tested against the wider world in a tournament.

In comparison, if you just watch a video and try to do the techniques with your buddy at home, you don't enter into that conversation. Nothing is being tested thoroughly, so your feedback is limited at best and the training probably won't hold up when brought anywhere else.

For another example, having a creative conversation requires taking a starting point or thread of thought and allowing it to be pursued farther and farther without shooting it down or terminating the thread completely. In this scenario the conversation partner or partners need a mindset that can spot opportunities for coherent elements that weren't necessarily obvious: rather than focusing only on execution, they can adapt concepts and techniques from one scenario to another to generate new ideas. Over time this process suggests some form of execution as a way of proving the idea, but not necessarily a "final" execution. The creation of a large work is an accumulation of this process: having the conversation and gradually adapting elements into the work as they are found relevant.

In that light the tangible "tools" of things like gear, software, etc. play a somewhat instrumental role. Sometimes they are the source of feedback, because the workflow can convey success or failure (error messages, misplaced pixels, etc.). At other times they are more like a component of execution and require a whole extra design phase in order to convey feedback: development tools play this role with respect to application software, since having the program compile and not crash, or even fulfill some written spec, isn't enough to tell you whether it's really doing the right thing. And when highly abstract techniques like mathematics are brought into play, the effect is almost magical, since the coding tools offer next to zero feedback on whether the resulting algorithm produces correct results: you can create some tests, but the basis is all steeped in a theory of operation that renders the physical program structure irrelevant.
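For instance (a made-up example, not from the comment above): a test for a numerical routine often checks a mathematical identity, so it says nothing about how the code is structured, only whether the theory holds:

    # A test whose authority comes from theory, not from the program's structure:
    # whatever inverse_fn looks like inside, A times inverse_fn(A) must be ~identity.
    import numpy as np

    def check_inverse(inverse_fn, trials=100, n=4, tol=1e-8):
        rng = np.random.default_rng(0)
        for _ in range(trials):
            a = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
            assert np.allclose(a @ inverse_fn(a), np.eye(n), atol=tol)

    check_inverse(np.linalg.inv)   # passes for any correct implementation

The test passes or fails regardless of how the implementation is organized, which is exactly the "feedback from theory rather than from the tool" situation described above.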


I don't think there is a "technical profit" to be had, though.

The output of the effort put into technology is a new process or workflow, rather than raw "tech stuff". It's a multiplier to a different input, some other task or application, and so the creators of the technology are frequently disconnected from feedback about its actual use.

For tech to be useful it has to succeed at creating enough leverage that you would use it over what was there previously, and plenty of technical projects get stuck on that point: they show an immense engineering effort but are solving the wrong problem because they don't optimize the desired workflow, or they expect everything to change around the product's workflow.

In that light, in a universe where most of these designs are bad and misguided and deserve to be put out to pasture, the cost-center mentality makes total sense because it asks the design to prove itself as early as possible. And the outcome of that in turn is the familiar rushed, unmaintainable prototype-in-production scenario.

New technology efforts always have this element of being a judo match: if you can find just the right time and place to strike and take the problem off-balance you can get an instant win, but the same strategy at a different moment doesn't work at all, no matter how hard you flex your coding muscles, and you'd have been better off not attempting any new tech.

The balance-sheet model doesn't explain when and where to strike, though. It just says whether or not you're trying to do so this quarter, and any benefit will most likely show up on a different business unit's balance sheet.


> I don't think there is a "technical profit" to be had, though.

Ostensibly, every net-new feature increases the pool of people for whom the software, as a whole, solves a problem. The performance and usability of that feature further add to that.[1] I'd argue that is technical profit, and it is realized as real profit when it converts into purchases/subscriptions/whatnot.

[1] For the sake of argument I'm assuming a scenario where the feature and its performance/usability are positively received.


I never thought about the idea of "technical profit," but I think it is insightful. There is, indeed, technical profit. When you have a quality design that fits the system's problem space really well, you can maintain and add features quickly and safely. This is technical profit.

When Paul Graham talks about how writing in Lisp enabled his startup to implement features more quickly than the competition, that's technical profit. Google's internal systems that allow them to maintain thousands of machines; create large, distributed filesystems; and who knows what else are technical profit.

There was an article recently about Boeing retiring the 747, and it noted how pilots would take a picture of the plane after flying it (passengers, too). In aerodynamics, form is function, and I suspect that the outward beauty reflected excellence of design. The design served them so well that they are only now retiring it, nearly 50 years after it was originally designed, despite all the advances since. If I'm right, this is technical profit.

Perhaps the idea of technical profit would be an easier sell to management, especially since management types see debt as useful while programmers see it as a liability.


> The output of the effort put into technology is a new process or workflow, rather than raw "tech stuff". It's a multiplier to a different input, some other task or application, and so the creators of the technology are frequently disconnected from feedback about its actual use.

Technical profit is really a feature or capability that's possible because of some technical thing (e.g. code). But the 'revenue' of the capability has to exceed the cost of designing, developing – and maintaining – the tech for a profit to be realized.


Intel has more of the existing partnerships with OEMs, and that isn't going to reverse immediately with a strong showing from AMD. The new Ryzen Mobile notebooks are low-to-midrange entries and groundbreaking in their category, but Intel still covers the higher-end segment.

It's easier to see this from Intel's perspective: Intel's biggest threat is the onrush of GPU-driven computing, and Nvidia is the market leader there. The classic play is to starve them of oxygen by leveraging existing channels to push them out of the gaming notebook market. Thus comes this weird saga with AMD and Raja, which is in fact a win-win deal: Intel gets ammo to fight Nvidia now and a key hire for their own development later, and AMD gets another source of cash flow and market share, plus a graceful exit from what has been reported as shaky, conflicted executive management at RTG. Although they've delivered decent hardware and software recently, there have been numerous PR flubs from the group and the business unit's performance is questionable overall. There is an open question of what happens to RTG later on, but perhaps the answer is simply to survive in Intel's shadow again.


I can buy that. I didn't know there were issues at RTG. The point about OEMs not switching quickly is also a reality. It still seems really strange. The next move could be Nvidia making their own high-performance CPUs to integrate with their GPUs. Think RISC-V here: they've already embraced it, and the potential and freedom to innovate are wide open.


There's so much history in between that it's easy to cherry pick your favorites:

* New economic and political philosophies (mercantilism, Machiavelli's works, the Magna Carta)

* Theological developments (Islam, Protestantism) and the various resulting conflicts as motivators for modernity

* Access to New World flora and fauna (new crops and livestock) increasing the ability of the nascent industrial state to support large populations at a lower cost

* Refinements to math and science that built on or revised ideas from antiquity (e.g. Newtonian physics, cell theory)

* Advanced craftsmanship in certain crucial precursor technologies like lenses, gears, etc. A lot has been made of how the industrial world started operating on a fast-paced timetable, but the Romans had no pocketwatches.


I recall seeing a comparison of DX7 vs. dexed patches on YouTube that mostly found audible differences in envelope lengths, not timbre. That seems like something relatively straightforward to fix, though there's always the potential for a caveat, like the timing being tied to the original hardware's sample rate.
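To make that caveat concrete, here's a toy sketch (not dexed's actual code; the increment and the rates are made-up numbers) of why an envelope implemented as a fixed per-sample step changes its wall-clock length when the emulation ticks at a different rate than the original hardware:

    # Toy model: an envelope stage that moves a fixed amount per audio sample.
    # The number of ticks to finish is fixed, so its duration in seconds
    # depends entirely on the sample rate it runs at.
    def stage_seconds(level_delta, increment_per_tick, sample_rate):
        ticks = level_delta / increment_per_tick
        return ticks / sample_rate

    delta = 1.0       # hypothetical full-scale envelope sweep
    inc = 0.00002     # hypothetical per-sample increment

    print(stage_seconds(delta, inc, 49096))  # at a rate often cited for the original hardware
    print(stage_seconds(delta, inc, 44100))  # at a typical emulator/host rate: noticeably longer

If the emulator recomputes the increment from the host sample rate instead of reusing the hardware constant, the lengths line up again, which is why this kind of mismatch is usually fixable.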


Scope variance is actually really common in mobile games as well as RPGs and simulation games. All of them share the same kind of design goal: bending time to fit a certain play-session or inner/outer game-loop length. More "immersive" games ignore the logistical problems of extreme detail and offer tools like time compression, but designing for accessibility bends things the other way, towards a spliced-up timeline where events don't have to happen in any particular order apart from one that builds up a sensible narrative.

Hence the existence of things like equipment screens that can be popped up mid-battle, pausing everything to let you think about how to divide up loot. And the energy model in mobile F2P, which doesn't make a lick of sense but enforces limited session length and progression. You can take this even further when you think about power-up items in pretty much any game.


I don't exactly agree. While the internal hardware does tend to receive less emphasis now, the physical build quality has increasingly been given a scientific treatment, and for good reason: it's the part that matters the most! A phone with an inferior camera, screen, and battery life is a worse phone for most people. So there are lots of reviews that subject these elements to tests with measurable results. That really wasn't the case years and years ago: the fact that the features of a device actually worked was often novelty enough.


I don't think the "physical build quality" is treated objectively at all. Instead it is all subjective. Take the material of the phone - "glass/aluminium is expensive to build this precisely, therefore it is a luxury experience". But on any actual objective measurements of physical properties - scratch resistance, weight, toughness - these materials are pretty poor compared to some alternatives.

However, alternatives like polycarbonate plastic are perceived as "cheap" even though they may be better from a specifications point of view.


Gorilla Glass is one of those rare product features that is so much better than the alternatives it's like magic. It's not indestructible if you drop it, but you can just chuck it in a bag with your keys and not worry about it, unlike polycarbonate.

Plastic-bodied laptops flex a bit more than metal ones. If you've got a large one and you regularly pick it up from one side, the flexing can gradually crack the PCB. I agree that aluminium scratches up rather badly.

(Handily, the Corning website lists products using Gorilla Glass, so you can avoid worrying about whether the vendor is using an imitation toughened glass.)


Gravity Falls was a big hit with older viewers (a grand supernatural-mystery plotline underpinning various family drama). There's also a new DuckTales - I've only caught the premiere of that, but it looks like a fitting reboot. Steven Universe has been very good, and before that there was Adventure Time.

Good all-ages shows are definitely there in the mix, although the pressure to cut everything down to formulaic audience pandering has remained a constant.


I feel seriously remiss for forgetting Adventure Time now. Gravity Falls has been recommended to me before, though it escaped my memory since I've actually never seen it.

Even though it's a miniseries, I do feel like I should perhaps mention Over the Garden Wall for a few reasons: 1. More than once I've been told that if I liked Over the Garden Wall, I'd like Gravity Falls; 2. Over the Garden Wall was produced by Patrick McHale, creative director and frequent writer for Adventure Time; and 3. It's that damn good -- the basic story is presented simply enough that a kid mature enough not to get freaked out by spooky things would enjoy it, and there are additional themes woven into the story that adults would enjoy too (I know I did).


As a baseline recommendation, try this: turn some of the apparently unfactorable pasta code into one large inline function and then start factoring from that point. Usually an opportunity will appear to realign the code in a new way that gets you something you didn't have before. Repeat a few times and you will end up with some original code abstractions that you never would have seen without a careful evolutionary process.
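A tiny hypothetical sketch of the motion (the function names and rules are made up; any language works the same way):

    # Step 1: inline the tangled helpers into one big function so the real
    # data flow becomes visible in a single place.
    def process_order(order):
        # (formerly validate(), apply_discounts(), and finalize(), inlined)
        if not order["items"]:
            raise ValueError("empty order")
        total = sum(item["price"] * item["qty"] for item in order["items"])
        if order.get("coupon") == "SAVE10":
            total *= 0.9
        order["total"] = round(total, 2)
        return order

    # Step 2: with everything visible, a seam you couldn't see before appears,
    # e.g. pricing is pure and can be pulled out and tested on its own.
    def price(items, coupon=None):
        total = sum(item["price"] * item["qty"] for item in items)
        return round(total * (0.9 if coupon == "SAVE10" else 1.0), 2)

The point isn't this particular split; it's that the inline step surfaces structure that the original decomposition was hiding.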

