HackerNews is very developer-focused. If you guys saw what a radiologist does on a 9-5 basis you'd be amazed it hasn't already been automated. Sitting behind a computer, looking at images and writing a note takes up 90% of a radiologist's time. There are innumerable tools to help radiologists read more images in less time: Dictation software, pre-filled templates, IDE-like editors with hotkeys for navigating reports, etc. There are even programs that automate the order in which images are presented so a radiologist can read high-complexity cases early, and burn through low-complexity ones later on.
What's even more striking is that the field of radiology is standardized, in stark contrast to the EMR world. All images are stored on PACS which communicate using DICOM and HL7. The challenges to full-automation are gaining access to data, training effective models, and, most importantly, driving user adoption. If case volumes continue to rise, radiologists will be more than happy to automate additional steps of their workflow.
Edit: A lot of the pushback from radiologists is about the feasibility of automated reads, as these have been preached for years with few coming to fruition. I like to point out that the deep learning renaissance in computer vision started in 2012 with AlexNet; this stuff is very new, more effective, and quite different from previous models.
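Edit 2: To give a sense of just how standardized the imaging side is: a DICOM file from essentially any vendor can be opened with off-the-shelf tooling. A minimal sketch using the open-source pydicom library (the file name and the tags printed are illustrative only, not from any particular PACS):

    # Minimal sketch: read a DICOM file with pydicom (pip install pydicom).
    # The file path is hypothetical; any DICOM export from a PACS should work.
    import pydicom

    ds = pydicom.dcmread("chest_xray.dcm")

    # Standard DICOM tags are available as attributes.
    print(ds.PatientID, ds.Modality, ds.StudyDate)

    # The pixel data decodes into a NumPy array, ready for analysis.
    img = ds.pixel_array
    print(img.shape, img.dtype)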
20 years ago I did some software to analyze satellite images of the Amazon to monitor deforestation. We got a result that matched the quality of human experts. The problem has always been political and economical, not technological.
I agree, nevertheless it's exciting to see some progress being made (see the HeartFlow and iSchemaView companies cited below). Incidentally, there is a new angle to the radiologists' pushback: interventional radiologists are generally quite in favor of automating reads. This year IR became its own specialty, but coupled to three years of DR training. This is the specialty that I'm entering. I want to be an interventionalist while advocating for the adoption of machine learning systems on the diagnostic side.
You are right that work in these areas tends not to be hampered by technology. It sometimes may even be hazardous to one's health, considering the value of timber. 20 years ago is a long time ago; GIS must have still been in its infancy then. These days one could pull free satellite images from NASA and probably just diff images, if not for those pesky clouds. Oh wait, actually I think NASA does do earth imagery at various spectral ranges: https://www.odysseyofthemind.com/aster.htm.
Curious to know what sort of methods you used then if you don't mind sharing.
I'm not sure if there was more to your story that you left out. Was your software successful? Is it in use today? Is automation widespread in the study of Amazonian deforestation?
How did your software work, what techniques was it using, and what was the surrounding context relating to the data that was fed in and the output the model gave?
There might be some interesting things that can be learned from this kind of info and applied to the current status quo (I'm definitely not arguing that there is a sociopolitical element).
I suppose full automation may be a long way off, but maybe in the mean time we can do both? Have the radiologist evaluate the images, and then look at the report the AI generated and see if they agree. Use the radiologist to help train the AI, and use the AI to double-check the radiologist.
Maybe if MRI scans get cheap enough (due to advances in cheap superconductors or whatever) that it's economically feasible to scan people regularly as a precautionary measure (rather than in response to some symptom), then the bulk of the cost might be in having the radiologist look at the scans. In those "there's nothing wrong but let's check anyway" cases, it might be better to just have the AI do it all, even if its accuracy is lower, if it represents a better health-care-dollar-spent to probability-of-detecting-a-serious-problem ratio. (If the alternative is to just not do the scan because the radiologist's fees are too expensive, then it's better to have the cheap scan than nothing at all.)
PACS developer here - I asked the same thing about automation a few years ago when I started in PACS. I was told it's the need to have someone to sue that holds back the automation, not so much the technology.
Wouldn't that objection disappear once automation is more reliable than a human operator, since the human operator would be less likely to be sued when taking responsibility for an automated system's results than they would for simply trying to make the call themselves?
It's more about who gets sued. If you make software to fully automate interpretation you would be the party sued. If there is still a human in the loop, you are not liable for their errors.
I don't understand how that works - if you're an employee of a company, generally the company is liable for mistakes you make (unless you're also a director of the company). So for the company, whether they choose to provide their services based on the output of some software or based on the judgement of a human employee, the result should be the same.
I can see an argument that if the company was sued then it could try to push the blame onto the software vendor, but surely that would be decided based on the contract between company and software vendor, which is usually defined by the software license.
But in many areas of medicine, solutions that outperform humans have existed for years (not related to the current deep learning wave). Yet they were not implemented, for regulatory/legal reasons.
Machine learning is already used in Radiology. Chances are eventually Radiology will be the domain of machines. But it's going to take some time to get there. Healthcare is extremely regulated and closed minded.
Most of the people in the thread you listed above are clearly biased towards medicine and against computer science and machine learning. But machine learning has been having success in diagnostic medicine even well before the deep learning boom that thread talks about.
As someone who just struggled through writing a DICOM parser, _standardized_ doesn't always mean the same. For more examples, see RETS in the real estate world :)
When I looked at HL7 a few years back, it was standardized in only the most technical sense and was basically a big bag of hurt. Epic made a big chunk of their money just charging people to build interfaces over various PACS (and other) systems. Has that changed recently?
HL7 has matured even more now.
The linear evolution from HL7 v2 to HL7 v3 did not win the day. Most are now moving to https://www.hl7.org/fhir/
The main issue with HL7 is not technical. From a business point of view, interoperating with other systems via HL7 gives a department one more reason to adopt a system other than yours.
Ruby makes working with things like HL7 super easy. Check out the super extensible HL7 ruby parser I wrote here: https://github.com/sufyanadam/simple_hl7_parser. It currently focuses on parsing ORU messages but can be easily extended to support any HL7 segment. Feel free to submit a PR :)
Yes and no. I spoke with an Epic engineer recently and he confirmed that Epic is still a front-end to HL7 databases. With that said, many PACS are adopting the WADO standard (https://www.research.ibm.com/haifa/projects/software/wado/), which provides a REST interface to radiology images. It makes it a lot easier to retrieve images for analysis, although you'd still have to implement DICOM/HL7 if you want to make a usable product.
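To give a flavor of what WADO looks like in practice: a WADO-URI retrieval is just an HTTP GET with the study/series/object UIDs as query parameters. A rough sketch (the host and UIDs are made up, and the exact parameters differ between WADO-URI and the newer DICOMweb/WADO-RS flavor, so treat this as illustrative only):

    # Hedged sketch of a WADO-URI style image retrieval; host and UIDs are fake.
    import requests

    params = {
        "requestType": "WADO",
        "studyUID": "1.2.840.113619.2.55.3",       # made-up study instance UID
        "seriesUID": "1.2.840.113619.2.55.3.1",    # made-up series instance UID
        "objectUID": "1.2.840.113619.2.55.3.1.1",  # made-up SOP instance UID
        "contentType": "application/dicom",
    }
    resp = requests.get("https://pacs.example.org/wado", params=params, timeout=30)
    resp.raise_for_status()

    with open("image.dcm", "wb") as f:
        f.write(resp.content)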
About 5 years ago, while observing an echocardiogram, I remarked to the radiologist that this was something that could be automated eventually. I don't think she took too kindly to my remark. She was measuring the distance between various structures etc., and I recall thinking about how to implement something to reproduce what she was doing.
I was talking to a friend in medicine about this recently. Is total automation actually required right now? As others have said, I'm sure politics is a big chunk of why that hasn't happened yet. What if we had tools to help radiologists identify where they should focus their attention in a particular image, and even give them some hinting specific to the contents of that particular image? It would not only save time, but have the added benefit of helping stave off errors caused by fatigue.
Can confirm. Have a friend who is in radiology residency, and I was completely shocked when I found out what radiologists do. When I told him that his job will be automated before he retires, he argued profusely against automation. But it's primarily an image recognition task, which computers are quite good at already and will likely improve.
Yes! Stanford has started (at least) two radiology imaging companies: HeartFlow and iSchemaView (formerly RAPID): http://www.ischemaview.com/
These are examples of next-generation radiology companies. The current generation of products are focused on image storage and display. These new companies offer automated image analysis before the radiologist even looks at the image. iSchemaView does hemorrhage maps as soon as new head CT or head MRI is acquired.
> The challenges to full-automation are gaining access to data, [...]
It looks like everybody sitting on their data is hindering progress. Is there anything that can be done about that politically? I mean, in many cases the data belongs to the public anyway, unless people signed a waiver, but what is the legality of that?
Wow. It is crazy how uninformed you all are (no offense). Radiologists do not just look at an image and say "white thing there!". They incorporate the appearance, characteristics, anatomy, pathophysiology, and the patient's clinical history/age/medications/surgical history, and combine all of that information into image findings but, more importantly, a focused differential diagnosis for the clinician. We have had computers reading EKGs for decades (a 2D line) and they still get it wrong 50+ percent of the time. No machine is taking over EKGs any time soon.
I'm sure machines will someday take over radiology but there will be many, many jobs automated before it (i.e. decades).
I wrote a really simple ruby gem to parse HL7 into ruby. It's super easy to extend, all you have to do is define a new class containing a function-to-column map. The key is the name of the function to call against a HL7 segment and the value is the position of the element in HL7 segment that you want the function to return. Check it out here: https://github.com/sufyanadam/simple_hl7_parser
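If you haven't seen HL7 v2 before, the format itself is simple: one segment per line, pipe-delimited fields, so a "function-to-column map" is really just a name-to-index lookup. The same idea sketched in Python (not the gem itself; the OBX positions below are the standard ones, but a real parser also needs escaping, components, and repetition handling):

    # Rough Python sketch of the function-to-column-map idea for one HL7 segment.
    SAMPLE_OBX = "OBX|1|NM|GLU^Glucose||92|mg/dL|70-105|N|||F"

    OBX_FIELDS = {"set_id": 1, "value_type": 2, "observation_id": 3, "value": 5, "units": 6}

    def parse_segment(segment, field_map):
        fields = segment.split("|")
        return {name: fields[idx] for name, idx in field_map.items() if idx < len(fields)}

    print(parse_segment(SAMPLE_OBX, OBX_FIELDS))
    # {'set_id': '1', 'value_type': 'NM', 'observation_id': 'GLU^Glucose',
    #  'value': '92', 'units': 'mg/dL'}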
I'm curious, how hard do you think it is to integrate new tech with the systems currently being used in radiology? Are the file formats and standards largely proprietary/closed?
It's not too difficult to make your own PACS, there's even open-source software for this. However, you need to be compliant and this usually means obtaining FDA approval of any device you want to install on the hospital network.
Have any of the methods made radiologists more efficient? If you were to imagine a system that made a radiologist 10x more efficient what would it look like?
Yes, massive increases in efficiency. A radiologist can read anywhere from 50-100 images per day (depending on modality CXR/MR/CT/etc). Voice dictation is ubiquitous and residents are trained from the beginning on how to navigate the templating software.
There are three areas that take a lot of time that radiologists would like to see automated:
Those are the hot three topics for machine learning. Personally, I think that a normal vs. non-normal classifier for CXRs would be more interesting because you could have a completely generated note for normal reads, and radiologists could just quickly look at the image without writing/dictating anything. Of note, hospitals and radiology departments typically lose money on X-ray reads because the reimbursement is $7-$20 (compared to $100+ for MR/CT). So if you could halve the read time, they might become profitable again.
Edit: In terms of 10x, what you'd want is a system that would automatically make the reads (i.e. the radiologist report), and a very efficient way for radiologists to verify what is written. It's hard to automate a pathologic read, but since roughly 50% of reads are normal, you could start with normal reports.
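For anyone curious, the normal-vs-abnormal idea is, at its core, just a binary image classifier. A minimal transfer-learning sketch in Keras, assuming labeled CXRs already sit on disk in cxr/normal/ and cxr/abnormal/ (getting that data, validating it clinically, and clearing the regulatory bar is the actual hard part, not this code):

    # Hedged sketch of a normal-vs-abnormal CXR classifier via transfer learning.
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "cxr/", image_size=(224, 224), batch_size=32, label_mode="binary")

    # ResNet50 expects its own preprocessing of the input pixels.
    preprocess = tf.keras.applications.resnet50.preprocess_input
    train_ds = train_ds.map(lambda x, y: (preprocess(x), y))

    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # start by training only the new classification head

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(abnormal)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    model.fit(train_ds, epochs=5)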
Before even getting to AI/ML types of things, our startup (www.radfox.fi) is doing "simple" fixes to current workflows, first making sure radiology referrals are both informative and decisive (= quality and accuracy of the referral).
And then bringing checklist-driven analysis to radiologists.
My field is web development, and, to be honest, the most exciting thing going on is that more people are starting to complain about the complexity of development. Hopefully this will lead to people slowing down and learning how to write better web software.
As an example, one survey (https://ashleynolan.co.uk/blog/frontend-tooling-survey-2016-...) put the number of developers who don't use any test tools at almost 50%. In the same survey about 80% of people stated their level of JS knowledge was Intermediate, Advanced or Expert.
Yeah, this is one area where webdev is way behind other fields, and I think we're going to see lots of new tooling in this area soon.
We're currently working on a way to help devs test web app functionality and complete user journeys without having to actually write tests in Selenium or whatever. The idea is to let devs write down what they want to test in English ("load the page", "search for a flight", "fill the form", "pay with a credit card", etc), then we'll use NLP to discern intent, and we have ML-trained models to actually execute the test in a browser.
You can give us arbitrary assertions, but we also have built-in tests for the page actually loading, the advertising tags you use, page performance, some security stuff (insecure content, malware links). At the end we hand you back your test results, along with video and perf stats. It’s massively faster than writing Selenium, and our tests won’t break every time an XPATH or ID changes.
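For contrast, this is roughly the kind of hand-written Selenium script we're trying to replace (the URL and selectors are invented for illustration, and they're exactly the brittle part that breaks whenever an ID or XPATH changes):

    # Toy hand-written Selenium test; URL and selectors are made up.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.org/flights")

    driver.find_element(By.ID, "flight-search").send_keys("SFO to JFK")
    driver.find_element(By.XPATH, "//button[@type='submit']").click()

    assert "Results" in driver.title  # breaks if the page title wording changes
    driver.quit()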
>tries to do so by using an imprecise, context-dependent language designed for person-to-person communication to instruct a machine
???
Selenium is its own can of worms, but it absolutely sounds like you're using the wrong tool for the job here. The problem stopping people from writing browser-based tests is not that people can't understand specific syntaxes or DSLs, it's actually the opposite: people don't have a good, reliable tool to implement browser-based testing in a predictable and specific way that does what a user would intuitively expect.
Selenium fails here because it has to manage interactions between browsers, because selectors are hard to get right on the first try and continually break as the page's format changes, because JavaScript can do literally anything to a page and that is really hard to anticipate and address reliably from a third-party testing framework like Selenium, especially if components are changing the DOM frequently, etc., because Selenium is subject to random breakage at the WebDriver layer that hangs up your (often long-running) script, and so on.
Whatever the right answers to a next-gen Selenium are, attempting to guess the user's meaning based on Real English by something that is itself an imperfect developing technology like NLP is pretty obviously not the correct toolkit to provide that. Remember, a huge amount of the frustration on Selenium comes from not having the utilities needed to specify your intention once and for all -- the ambiguities of plain English will not help.
If your thing works, it will have to end up as a keyword based DSL like SQL. SQL is usually not so scary to newcomers because a simple statement is pretty accessible, not having any weird symbols or confusing boilerplate, but SQL has a rigid structure and it's parsed in conventional, non-ambiguous ways. "BrowserTestQL" (BTQL) would need to be similar, like "FILL FORM my_form WITH SAMPLE VALUES FROM visa_card_network;"
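To make that concrete: a statement like the one above would be handled by a boring, deterministic parser, not NLP. A toy sketch (the BTQL grammar here is invented, of course):

    # Toy parser for the hypothetical "BTQL" FILL statement above -- a rigid
    # keyword grammar parsed deterministically, no NLP involved.
    import re

    BTQL_FILL = re.compile(
        r"FILL FORM (?P<form>\w+) WITH SAMPLE VALUES FROM (?P<source>\w+);", re.I)

    def parse(statement):
        m = BTQL_FILL.match(statement.strip())
        if not m:
            raise SyntaxError("not a valid BTQL FILL statement: %r" % statement)
        return {"action": "fill_form", "form": m.group("form"), "source": m.group("source")}

    print(parse("FILL FORM my_form WITH SAMPLE VALUES FROM visa_card_network;"))
    # {'action': 'fill_form', 'form': 'my_form', 'source': 'visa_card_network'}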
The biggest piece that's missing in Selenium is probably a new, consistent element hashing selector format; each element on the page should have a machine-generated selector assigned under the covers and that selector should never change for as long as the human is likely to consider it the "same element". The human should then use those identifiers to specify the elements targeted. I don't know how to do that.
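A naive first cut would be to hash only the attributes a human would treat as identity-defining and to ignore layout entirely; writing it down makes it obvious why this is hard, since the hash breaks as soon as any of those attributes churn:

    # Naive sketch of a "stable element hash": digest only identity-ish attributes
    # and ignore position/styling. It fails exactly where the hard part is: the
    # moment an id or label changes, the "same element" gets a new hash.
    import hashlib

    def element_hash(tag, attrs, text=""):
        identity_attrs = {k: v for k, v in sorted(attrs.items())
                          if k in {"id", "name", "type", "aria-label"}}
        raw = "%s|%s|%s" % (tag, identity_attrs, text.strip().lower())
        return hashlib.sha1(raw.encode("utf-8")).hexdigest()[:12]

    print(element_hash("button", {"id": "checkout", "class": "btn btn-lg"}, "Pay now"))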
The second biggest piece that's missing from Selenium is a consistent, stable WebDriver platform that almost never errors out mid-script; this may involve some type of compile-time checking against the page's structure or something (which I know is hard/possibly impossible because of JS and everything else).
Totally agree with this. The concerning part for me is that ML is about making a "best guess" given some data. This means that your tests may pass one time and fail another - inconsistent tests aren't tests at all.
Your post gave me deja vu to an automation workflow I cobbled together a few months ago, which I found wonderfully productive for steering a bot: Vimperator keybindings. Selenium can use most of them right out of the box. It's a terrific navigation layer. For instance, pressing "f" enumerates all the visible links on the page and assigns to each one a keybinding. The keybindings are displayed in tooltips and can be trivially extracted with CSS. You can keep sending keys to the browser, and only links that contain the anchor text remain in the set of candidates. Of course the "hashing" of links to keybindings is completely relative to the viewport, so this won't satisfy you completely. But it was an idea I had randomly one day, as an alternative to the trapeze act of navigating through the boughs of the DOM tree, and lo and behold it worked nicely.
Sounds intriguing. There are a few tools for recording interactions with a webpage in order to replay the actions as a test (Ghost Inspector, Selenium IDE, etc) but they tend to be pretty horrible. I've been working on my own as a Chrome extension for a little while. What you're building sounds really interesting though, especially if it can deal with complex Javascript apps. Anything that can make developers more inclined to test things is a good thing.
Alternatively, one of the consequences of React is that the front-end can largely be unit-tested. You can at least get a pretty good idea that the page will render what you expect if it gets the data you expect.
And whether or not it gets that data is a unit test in another place.
I'm not a huge fan of React, or javascript, but having been forced to work in it, this is one of the wins.
You could test the front-end before react though? Also, one can still have a God component. I don't think React changed anything from a testability point of view. Well written modular code is well written modular code.
Frameworks like Knockout have been around for quite some time now. You don't have to use React to not depend on the DOM. There are many alternatives to "jQuery based front-end development" that's not React. Aurelia, for instance, happens to be an amazing framework in my opinion that's also highly testable and that's not React. Like I said, modular code is modular code. You can write good, modular code with just require js modules, and you can also write terrible monolithic React components.
Testability isn't the domain of the view layer.
Abstracting the DOM into a declarative DOM is great for performance, but doesn't lead to necessarily more testable code.
We are using Ghost Inspector mainly for its ability to compare screenshots between runs. I think the future of testing will be apps like this that don't require you to specify every little div but just record your actions and play them back and catch differences. Right now Ghost Inspector only takes a screenshot at the very end, but they are adding a feature where you can take a shot anytime. As these apps get better at knowing what matters and what to ignore - all the better.
Yeah, I've looked at all the current tools and there are basically two types (other than just writing straight Selenium):
- Test recorders that aren't a great experience and output incomprehensible, brittle tests.
- Test composers that I can best describe as 90's SQL query builders for Selenium.
Complex JS apps are still a challenge for us (especially with some of the WTF code we come across in the wild), but we have a strategy in the works for them. We're still pre-release though. If you're interested, send me an email (donal@unravel.io) and I'll add you to our alpha list.
From reading the web page, I think an Unravel user won't need to use a specific, pre-defined language, in which case it's very deliberately not a DSL like Gherkin/Cucumber.
I've been interviewing junior/intermediate frontend candidates for the past few months now. 90% don't use any test tools, and their biggest complaint is their current employer forcing a new framework/library for the sake of being bleeding edge. While interesting to them, it turns out most of them really just want to see what they can do with vanilla JS.
Why do you expect that a junior developer (someone with very little or no development experience) will use a test tool or any other development technique? I expect a junior developer in the software field to be able to program, and not much more.
> I expect a junior developer in the software field to be able to program, and not much more.
That is very often the case. It needs to change. Testing is a part of software development, and anyone who writes software should be aware of it. I feel the same way about documentation. And requirements. You can't write good software without knowledge of the processes that surround development. It isn't enough just to be able to write great code.
Maybe. Personally, I've come to think that you need the right tool for the right job.
If you spend more time writing / running tests than you would fixing the bugs they find, you may be doing it wrong. If you're writing documentation no one will read, you may be doing it wrong.
They clearly do have a place though. As for maintaining a set of requirements... I appreciate there must be some environments where what is required is well understood and relatively stable. I'm not quite sure if I should look forward to working in such a place or not!
> I appreciate there must be some environments where what is required is well understood and relatively stable.
Actually there isn't. Every project, no matter how it's managed, changes as it goes on. It has to, because you learn and discover things along the way. That's why maintaining and understanding project requirements and how they've changed is incredibly important. If you don't keep on top of them then you end up with a project that wanders all over the place and never finishes. Or you build something that misses out important features. Or the project costs far too much. Requirements are not tasks, or epics, or things you're working on right now. They're the goals that the tasks and epics work towards.
(My first startup was a requirements management app.)
> If you spend more time writing / running tests than you would fixing the bugs they find, you may be doing it wrong.
Why should those 2 activities be compared? They do not compare: writing/running tests is about discovering the bug, not fixing it. You still need to fix it after you have done your testing activity.
The time spent writing/running tests should better be compared to the time spent in bug discovery without tests, i.e. how much you value the fact that your users are going to undergo bugs, what the consequences of the users hitting bugs are, what the process to report them is, etc.
You're right, unless you're at an extreme (zero automated testing, zero bugs found in the wild) it's much more nuanced as to what the balance is (or should be), but there is a balance.
I think there's somewhat of a gap between "junior" and "junior/intermediate", but given my understanding of university / bootcamp curricula, it's probably both the case that it's unrealistic to expect junior devs to have meaningful testing experience and that it's essential to make sure that potential hires have some awareness / positive attitude toward testing as part of software development.
> Why do you expect that a junior developer (someone with very little or no development experience)
I don't think that's the definition of a junior developer. Test tools are a part of building software; you should be hiring devs that have created projects that use tests of some sort, if not with the technology you're using.
> I expect a junior developer in the software field to be able to program, and not much more.
I don't know how you can have little to no dev experience and know how to program.
Development and programming require different skills.
A developer needs to know the development cycle, automated testing, continuous integration, the software life cycle, ticketing systems, source control systems, branching and merging, cooperating, etc.
A programmer needs to know programming languages, patterns, algorithms, computer internals, effectiveness, profiling, debugging, etc.
A junior developer (in the software field) has little or no experience in development, so a junior developer is almost equal to a programmer, which causes a lot of confusion.
Because we assume that developers are trained professionals, presumably with a CS or software engineering degree (or both), and that they've been properly trained in software development - which puts testing front and center.
Computer Science has absolutely nothing to do with software testing. Your software engineering classes will teach students about unit tests, but not much more.
If by 'testing' you really mean 'unit testing', as I suspect most junior engineers who claim testing experience do, then hope is already lost. The one saving grace is that there is enough churn in webdev that nothing lasts long enough to reveal how fragile it is.
Not if they take a good class in Test-driven development (TDD) - which I would recommend to students. The "science" behind it will outlive the practice churn.
Of course, if they take a good class in test driven DEVELOPMENT, they will be developers. Development (problem solving with the goal of creating and supporting a product) is not the same as programming (creating instructions for a computer to do something).
Really? Where I work, we expect our junior candidates to know how to test code and to be careful and incremental, more than anything else. It's easier to teach someone how to code better than to let them send anything to production with 0 tests.
To me as a web developer, the most exciting new development is react native (not react itself) - it's redefining the border between web and native apps in a way that cordova and xamarin never did.
As a web dev, the most exciting development I see is the rise of progressive web apps and a shift away from native apps in situations where a web-like experience is more appropriate.
That said, I'll be thrilled if React Native gives rise to higher quality apps in situations where a native app is unavoidable (e.g. my bank's app).
I'm actually hard-pressed to think of any non-gaming interface that is better suited by a native app than a web app in 2017.
Five years ago native apps made a level of UX possible that was unheard of on the web, to say nothing of mobile. But today not only has HTML/js closed the gap, but whiz-bang native animations aren't impressive just on account of being novel anymore.
I'm feeling that React Native is just another artificial constraint we developers have to deal with. I would have preferred it if Facebook (and/or Apple, Google) would have pushed WebApps more instead. A web browser, with the right amount of love, would be more than capable of doing the stuff that React Native can do.
Google still is! Progressive web apps now have the ability to be installed "natively" on Android devices [0], meaning they show up in the app drawer like any other app rather than being limited to a home screen icon.
I think these are the future. Once they catch on with mainstream consumers, native apps won't stand a chance against the convenience of simply visiting a website to install/use. Plus, on the developer end, we finally have a true "write once, run anywhere" situation that doesn't involve any complex toolchains or hacky wrappers.
There's a market imo for a full solution that includes the front end and the entire backend, including deployment, seamless scaling, seamless upgrading, seamless backups, seamless local dev, seamless staging, etc.
99% of web apps need the same features but most of this is still up to manually rolling your own.
I should be able to clone some repo, enter some DO/AWS/GOOG keys and push.
I've been using and loving Zappa[1] lately. Basically it lets you seamlessly deploy a flask app to AWS Lambda -- that solves your deployment, scaling, upgrading, staging, backups, etc. And local dev is just running the flask app locally.
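For anyone who hasn't tried it, the whole thing is roughly this (names, bucket, and region are placeholders; check Zappa's docs for the current settings format):

    # app.py -- a minimal Flask app; Zappa deploys it to AWS Lambda + API Gateway.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "hello from Lambda"

    # Rough workflow from the project directory (settings values are placeholders):
    #   pip install flask zappa
    #   zappa init          # writes zappa_settings.json (app_function, s3_bucket, aws_region, ...)
    #   zappa deploy dev    # first deployment
    #   zappa update dev    # subsequent deployments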
I think the most exciting trend in web development is the rise in popularity of functional programming styles in front-end frameworks.
This makes the complexity problem much easier to solve, as the code is (should be) less likely to cause an unanticipated mutated state which can't be easily tested for.
I remember this same "rise in popularity" in 2005. Every few years functional advocates get all excited (last time was F# support in Visual Studio, all C# programmers were going to switch, naturally).
I suspect the reality is a small subset of programmers think functional programming is amazing and everyone else hates it. You might think it reduces complexity, but a lot of people feel it reduces comprehensibility.
> I suspect the reality is a small subset of programmers think functional programming is amazing and everyone else hates it.
Yep. It's not that I hate it, I just don't like it. The thing is that the functional-praisers are much more vocal about how they love it whereas people who write imperative do not care much about Haskell.
We are happy with LINQ and that's all 99% of us wants/needs.
Would you mind sharing an example or pointing me to a good guide that explains this concept? How does functional programming make a problem less complex than OO or imperative? I've heard this a couple of times, but the intuition has never quite clicked for me.
It's state. Functional programs tend to have less state, as their output is the same for a given input. With things like jQuery you quickly introduce state, say whether some dropdown is open, which your next function will have to check is true or false before proceeding. And so on.
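A contrived illustration of the difference (in Python rather than JS, but the point carries over):

    # Stateful style: the result depends on hidden state mutated elsewhere.
    dropdown_open = False

    def toggle_dropdown():
        global dropdown_open
        dropdown_open = not dropdown_open  # every later function has to know about this

    # Functional style: same input, same output; no hidden state to check first.
    def next_dropdown_state(is_open):
        return not is_open

    assert next_dropdown_state(False) is True
    assert next_dropdown_state(True) is False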
I'll give it a shot; functional programming style--some languages enforce it, some languages merely have features that allow for it if the author is disciplined enough to do so (JavaScript is in the latter camp)--generally eschews mutable state and side effects, i.e. a variable `foo` that is declared outside of the function cannot be altered by the function. Some "pure" functional languages restrict all functions to a single argument. This may feel like an unnecessary constraint (and opinions vary), but one thing that can't be denied is that it keeps your methods small and simple; in any case, it can be worked around by applying a technique called "Currying"[1], which is named after the mathematician & logician Haskell Curry, not the dish (also the namesake of the eponymous functional programming language [the mathematician, not the dish]).
Because nothing outside of the function can be changed, and dependencies are always provided as function arguments, the resulting code is extremely predictable and easy to test, and in some cases your program can be mathematically proven correct (albeit with a lot of extra work). Dependency injection, mocks, etc are trivial to implement since they are passed directly to the function, instead of requiring long and convoluted "helper" classes to change the environment to test a function with many side effects and global dependencies. This can lead to functions with an excessively long list of parameters, but it's still a net win in my opinion (this can also be mitigated by Currying).
A side-effect (hah) of this ruleset is that your code will tend to have many small, simple, and easy to test methods with a single responsibility; contrast this with long and monolithic methods with many responsibilities, lots of unpredictable side effects that change the behavior of the function depending on the state of the program in its entirety, and which span dozens or hundreds of lines. Which would you rather debug and write tests for? Tangentially, this is why I hate Wordpress; the entire codebase is structured around the use of side-effects and global variables that are impossible to predict ahead of runtime.
There is much, much more to functional programming (see Monads[2] and Combinators[3]), but if you don't take away anything else, at least try to enforce the no-side-effects rule. A function without side-effects is deterministic; i.e. it will always give you the same output for any given set of inputs (idempotence comes for free). Because everything is a function, functions are first-class citizens, and there are only a few simple data structures, it becomes easy to chain transformations and combine them by applying some of the arguments ahead of time. Generally you will end up with many generalized functions which can be composed to do anything you require without writing a new function for a specific task, thus keeping your codebase small and efficient. It's possible to write ugly functional code, and it's possible to write beautiful and efficient object-oriented code, but the stricter rules of functional style theoretically make the codebase less likely to devolve into incomprehensible spaghetti.
Manning Publications has a book[4] on functional programming in javascript, which I own but haven't gotten around to reading yet, so I can't vouch for it. However, it does seem highly applicable.
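If it helps, here is the partial-application/currying idea in a few lines of Python (functools.partial rather than true one-argument-at-a-time currying, but it shows why small pure functions compose so nicely):

    # Build specialized functions out of a general pure one, then compose them.
    from functools import partial, reduce

    def apply_rate(rate, amount):
        return amount * (1 + rate)

    add_sales_tax = partial(apply_rate, 0.08)  # specialize the general function
    add_tip = partial(apply_rate, 0.20)

    def compose(*fns):
        return lambda x: reduce(lambda acc, f: f(acc), fns, x)

    total = compose(add_sales_tax, add_tip)

    assert round(total(100.0), 2) == 129.60   # 100 * 1.08 * 1.20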
Huge and seemingly often unacknowledged issue these days. And many attempted solutions seem to be adding fuel to the fire (or salt to the wound) by creating more tools (to fix problems with previous tools) ...
Red (red-lang.org) is one different sort of attempt at tackling modern software development complexity. It's like an improved version of REBOL, but aims to be a lot more - like a single language (two actually, Red/System and Red itself) for writing (close to) the full stack of development work, from low level to high level. Early days though, and they might not have anything much for web dev yet (though REBOL had/has plenty of Internet protocol and handling support).
If you are counting on people slowing down, you're in the wrong business.
You're asking the wrong question. It shouldn't be "how do we get people to slow down?" It should be, "how do we make rapid software development better?"
It's probably just a matter of time, because every software ecosystem goes through its phases of "maturity" regarding testing.
Not too long ago (in human-years, not internet-years), most node packages weren't built with unit testing. Now it's quite common in the popular packages.
Website UI is probably the same thing. After all, it took us a really long time until the whole HTML5 spec finally stabilised.
So you will probably see the tipping point occur over the next 10 human years, or less.
And just like you, I've been really frustrated with the inadequacy of UI testing tools, especially with Selenium. So like @donaltroddyn, I set out to develop my own UI testing tool (https://uilicious.com/), to simplify the test flow and experience.
So wait around, you will see new tools, and watch them learn from one another. And if you want to dive right into it, we are currently running a closed beta.
Then, considering that people don't spend sufficient time/money on proper testing, how much time/money do they spend on security? Probably even less.
Container orchestrators becoming mainstream is something I'm very excited about. Tools like DC/OS, Nomad, Kubernetes, Docker Swarm Mode, Triton, Rancher make it so much easier to have fast development cycles. Last week I went from idea, to concept, to deployed in production in a single day. And it is automatically kept available, restarted if it fails, traffic is routed correctly, other services can discover it, the underlying infrastructure can be changed without anyone ever noticing it.
This also brings me to Traefik, one of the coolest projects I have come across in the last months.
Traefik + DC/OS + CI/CD is what allows developers to create value for the business in hours and not in days or weeks.
I've been researching container orchestration recently and I personally don't see the incentive to jump into containers from an infrastructure perspective. I think using packer/vagrant/ansible is pretty easy and meets my needs. The orchestration overhead for containers seems like overhead I don't need just yet. So the big question I've been asking myself is at what point an AWS AMI will be less versatile than a Docker container, assuming it originated with Packer and I can build images for other clouds with Packer.
From a developer perspective I am very excited about containers and believe local dev w/ docker is warranted.
We mainly use Docker because it finally allows us to eradicate all the "Worked in dev" issues we had in the past. From an application perspective, Dev, Accept and Prod are identical.
Also, we deploy to production at least 4 times a day, the time from commit to deployable to production is about 30 minutes. And because it is a container it will start with a clean, documented setup (Dockerfile) every time. There is no possibility of manual additions, fixes or handholding.
I'm pretty excited for where it's taking things closer to the PaaS end of the spectrum. Been diving down that rabbit hole a bit in search of "easiest way for 2-3 devs to run a reliable infrastructure." Recently moved from EC2 to Heroku, which I'm pretty happy with, but not sure if will be more a stopgap or long-term. I like the direction OpenShift seems to be headed in.
We use DC/OS for all our stateless services, when we started looking at container orchestrators the bootstrap for DC/OS was very easy (automatic via cloudformation) and it was quite complicated for kubernetes.
We mainly use DC/OS to run more services on fewer instances.
Please take a look at the Cloud Native Computing Foundation (I'm the executive director) at cncf.io. We have a lot of free resources for learning more about the space.
The new development is that the software to run your own Heroku is becoming open source and easy to operate.
From an "I just want to get my app deployed" perspective it may still be best to just use Heroku. But from a "new developments in the field" perspective, the fact that I can rent a few machines and have my own Heroku microcosm for small declining effort is pretty cool.
I read the infoGAN paper yesterday. It blew my mind. https://arxiv.org/pdf/1606.03657v1.pdf. This is a way to do disentangled feature representation learning without supervision.
All these are definitely cool, but I think we're still a long way from leaving the "look at this cool toy" status and stepping into the "I can add value to society" status.
Furthermore, if we consider that most of these DL papers completely ignore the fact that the nets must run for days on a GPU to get decent results, then everything appears way less impressive. But that's just my opinion.
I love working in deep learning, but we still have LOTS of work to do.
Could you elaborate? After running for days / weeks/ months the output is a net that can do inference in seconds, or with some now-common techniques milliseconds with only small reductions in accuracy. These nets can then be deployed to phones to solve a rapidly increasing number of identification tasks, everything from plants to cancer.
The time from theoretical paper to widely deployed app is smaller in DL than in any other field I have experience with.
It's true that there aren't too many practical applications of GANs yet, but I'd argue that transfer learning is already pretty powerful. It's fairly commonplace to approach a computer vision task by starting with VGG/AlexNet/etc and fine-tuning it on a relatively small dataset.
There is a LOT of investment in model training right now, with frameworks, specialized hardware (like Google's TPU), cloud services, etc., not to mention the GPU vendors themselves scrabbling like mad to develop chipsets that accommodate this more efficiently.
It's going to take less and less time and money to train a useful model.
GANs are a general tool -- they just happen to get a lot of attention for generating images of stuff. Here's an example for generating sequences [1]. The example is language oriented, but ultimately GANs are interesting because you can use them to build a generator for an arbitrary data distribution. This can have many applications in engineering (to take a random example -- generating plausible looking chemical structures under a certain set of constraints). As with any ML application, you need to quantify your tolerance for "inaccuracy" (in a generative setting, how well the generated distribution matches the true data distribution). This is simply an engineering trade-off and will vary based on the application.
The approach was applied without any real knowledge of art; even if it hasn't been applied to the domains you mentioned, I don't see why not.
[edit]: it is a lot harder to build a NN when there are very strict rules. But it is also a lot easier to verify and penalize it, and to generate synthetic data.
It's all subjective, but as a data analyst I'm excited about probabilistic databases. Short version: load your sample data sets, provide some priors, and then query the population as if you had no missing data.
The most developed implementation is BayesDB[1], but there are a lot of ideas coming out of a number of places right now.
Sounds like in many applications of machine learning (I'm thinking mainly of the swathes that name-drop it on a landing page, and probably usually mean 'linear regression') this could replace the brunt of the work.
e.g. store customer orders in the DB, and query `P(c buys x | c bought y)` in order to make recommendations - where `c buys x` is unknown, but `c bought y` occurred, and we know x and y for other customers.
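Hand-rolled, that query is just a conditional frequency over the orders table; something like the sketch below (a toy schema and plain pandas -- a probabilistic DB like BayesDB additionally gives you priors and handles the missing-data/uncertainty part for you):

    # Toy version of P(c buys x | c bought y) from a flat orders table.
    import pandas as pd

    orders = pd.DataFrame({
        "customer": ["a", "a", "b", "b", "c", "d", "d"],
        "product":  ["y", "x", "y", "z", "y", "x", "z"],
    })

    def p_buys_given_bought(df, x, y):
        baskets = df.groupby("customer")["product"].apply(set)
        bought_y = baskets[baskets.apply(lambda s: y in s)]
        if len(bought_y) == 0:
            return float("nan")
        return bought_y.apply(lambda s: x in s).mean()

    print(p_buys_given_bought(orders, x="x", y="y"))  # 1 of the 3 y-buyers also bought x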
The way I see it, the real utility comes from the fact that domain models such as those in a company's data warehouse are typically very complex, and a great deal of care often goes into mapping out that complexity via relational modelling. It's not just that c bought x and y, but also that c has an age, and a gender, and last bought z 50 days ago, and lives in Denver, and so on.
Having easy access to the probability distributions associated with those relational models gives you a lot of leverage to solve real life problems with.
Would you be so kind as to provide several introductory articles to probabilistic matching of data? Fuzzy searching, most-probable matches, things like that?
The agent modelling that I'm aware of is in simulation. I have a feeling that there would be a lot of interesting duality between the fields of agent based simulation and monte-carlo based probabilistic modelling, but I don't know enough about the former to say off hand.
ABM is an MC method, because different individual agents randomize their behavior based on distributions associated with possible courses of action defined by their agent type.
Compilers:
- End-to-end verification of compilers, e.g. CompCert and CakeML.
Programming languages:
- Mainstreamisation of the ideas of ML-like languages, e.g. Scala, Rust, Haskell, and the effect these ideas have on legacy languages, e.g. C++, Java 9, C#.
- Beginning of use of resource types outside pure research, e.g. affine types in Rust and experimental use of session types.
Foundation of mathematics:
- Homotopy type theory.
- Increasing mainstreamisation of interactive theorem provers, e.g. Isabelle/HOL, Coq, Agda.
Program verification:
- Increasing ability to have program logics for most programming language constructs.
- Increasingly usable automatic theorem provers (SAT and SMT solvers) that just about everything in automated program verification 'compiles' down to.
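To make the last point concrete, this is what the bottom of that 'compiles down to' stack looks like: a toy constraint handed to Z3 through its Python bindings (a hand-written toy, not a real verification condition):

    # Toy Z3 query via its Python bindings (pip install z3-solver). Real verifiers
    # emit verification conditions of this shape automatically.
    from z3 import Ints, Solver, sat

    x, y = Ints("x y")
    s = Solver()
    s.add(x + y == 10, x > y, y > 0)  # a satisfiable toy constraint

    if s.check() == sat:
        print(s.model())              # e.g. [y = 1, x = 9]
    else:
        print("unsatisfiable")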
I work in CPU design. So I'd add that the tools for formally verifying CPUs have come a very long way in the last two years, and the next two years look like they will be very exciting indeed.
There is a big overlap in prover theory. HOL Light comes from Intel's John Harrison.
I don't know much about CPUs, but I suspect that one of the core problems of software verification, the absence of a useful specification, isn't much of an issue with hardware.
There's a new project to implement TLS in python [1] with the idea to have secure and verifiable code, but so far (to my knowledge) there's no formal tool involved in the verifiability aspect -- the approach is mostly around keeping the implementation as RFC-compliant as possible.
I'd be really interested in applying any of these techniques to a full TLS implementation.
Can you talk more about this? I even got THE book on this (haven't really read it yet though) and I think I get the rough ideas, but I'd be curious to hear what HoTT means to you (lol).
Not the OP, but the cool thing about HoTT to a user of proof assistants is that it makes working with "quotients" easier. I put quotients in parentheses because really HoTT is about generalizing the idea of a quotient type. Quotients of sets/algebras are one of the core tools of mathematics and old school type theory doesn't have them so you have to manually keep around equivalence relations and prove over and over again that you respect the relations.
In HoTT, there is an extension of inductive types that allows you to, not just have constructors, but also to impose "equalities" so these generalized "quotients" really have first-class status in the language.
As far as "exciting developments" in HoTT, the big one right now is Cubical Type Theory [1], which is the first implementation of the new ideas of HoTT where Higher inductive types and the univalence axiom "compute" which means that the proof assistant can do more work for you when you use those features.
I just saw a talk about it and from talking to people about it, this means that it won't be too long (< 5 years I predict) before we have this stuff implemented in Agda and/or Coq.
Finally, I just want to say to people that are scared off or annoyed by all of the abstract talk about "homotopies" and "cubes", you have to understand that this is very new research and we don't yet know the best ways to use and explain these concepts. I certainly think that people will be able to use this stuff without having to learn anything about homotopy theory, though the intuition will probably help.
HoTT brought dependent types and interactive theorem proving to the masses. Before HoTT, the number of researchers working seriously on dependent type theory was probably < 20. This has now changed, and the field is developing at a much more rapid pace than before.
For someone wanting to begin involving program verification in a practical way to their day-to-day work, do you have any suggestions, resources, or anything?
How much do you know about modern testing, abstract interpretation, SAT/SMT solving? In any case, as of Feb 2017, a lot of this technology is not yet economical for non-safety critical mainstream programming. Peter O'Hearn's talk at the Turing Institute https://www.youtube.com/watch?v=lcVx3g3SmmY might be of interest.
Google "certified programming with dependent types", "program logics for certified compilers" and "software foundations class". Also, work through the Dafny tutorials, here: http://rise4fun.com/Dafny/tutorial
There are some ways in which these tools are not economical. There is currently a big gap. On one side of the gap, you have SMT solvers, which have encoded in them decades of institutional knowledge about generating solutions to formulae. An SMT solver is filled with tons of "strategies" and "tactics", which are also known as "heuristics" and "hacks" to everyone else. It applies those heuristics, and a few core algorithms, to formulae to automatically come up with a solution. This means that the behavior is somewhat unpredictable: sometimes you can have a formula with 60,000 free variables solved in under half a second, sometimes you can have a formula with 10 that takes an hour.
It sucks when that's in your type system, because then your compilation speeds become variable. Additionally, it's difficult to debug why compiling something would be slow (and by slow, I mean sometimes it will "time out" because otherwise it would run forever) because you have to trace through your programming language's variables into the solver's variables. If a solver can say "no, this definitely isn't safe", most tools are smart enough to pull the reasoning for "definitely not safe" out into a path through the program that the programmer can study.
On the other end of the spectrum are tools like coq and why3. They do very little automatically and require you, the programmer, to specify in painstaking detail why your program is okay. For an example of what I mean by "painstaking", the theorem prover coq could say to you "okay, I know that x = y, and that x and y are natural numbers, but what I don't know is if y = x." You have to tell coq what axiom, from already established axioms, will show that x = y implies y = x.
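Spelled out, that single step is tiny but still has to be stated by hand; in Lean syntax (the comment above is about coq, but the flavor is much the same) it looks like this:

    -- From the hypothesis x = y, conclude y = x by explicitly invoking symmetry.
    example (x y : Nat) (h : x = y) : y = x := h.symm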
Surely there's room for some compromise, right? Well, this is an active area of research. I am working on projects that try to strike a balance between these two design points, as are many others, but unlike the GP I don't think there's anything to be that excited about yet.
There's a lot of problems with existing work and applying it to the real world. Tools that reason about C programs in coq have a very mature set of libraries/theorems to reason about memory and integer arithmetic but the libraries they use to turn C code into coq data structures can't deal with C code constructs like "switch." Tools that do verification at the SMT level are frequently totally new languages, with limited/no interoperability with existing libraries, and selling those in the real world is hard.
It's unlikely that any of this will change in the near term because the community of people that care enough about their software reliability is very small and modestly funded. Additionally, making usable developer tools is an orthogonal skill from doing original research, and as a researcher, I both am not rewarded for making usable developer tools, and think that making usable developer tools is much, much harder than doing original research.
> This means that the behavior is somewhat unpredictable, sometimes you can have a formula with 60,000 free variables solved in under half a second, sometimes you can have a formula with 10 that takes an hour.
It sadly also depends a lot on the solver used and the way the problem was encoded in SMT. For a class in college I once tried to solve Fillomino puzzles using SMT. I programmed two solutions: one used a SAT encoding of Warshall's algorithm and the other constructed spanning trees. On some puzzles the first solver required multiple hours whereas the second only needed a few seconds, while on other puzzles it was the complete opposite. My second encoding needed hours for a puzzle which I could solve by hand in literally a few seconds. SAT and SMT solvers are extremely cool, but incredibly unpredictable.
Absolutely! For another example, Z3 changes what heuristics it has and which it prefers to use from version to version. What happens when you keep your compiler the same but use a newer Z3? Researchers that make these tools will flatly tell you not to do that.
It's frustrating because this stuff really works. Making it work probably doesn't have to be hard, but researchers that know both about verification and usability basically don't exist. I blame the CS community's disdain for HCI as a field.
Thanks for taking the time to write the suggestions and detail the pain points that exist at the moment.
I had heard about Dafny but hadn't seen the tutorial!
> Additionally, making usable developer tools is an orthogonal skill from doing original research, and as a researcher, I both am not rewarded for making usable developer tools, and think that making usable developer tools is much, much harder than doing original research.
When you're saying they're orthogonal, are you effectively saying that researchers generally don't have 'strong programming skills' (as far as actually whacking out code)? If so, how feasible would it be for someone who is not a researcher, but a good general software engineer, to work on the developer tools side of things?
There's more I could write about researchers and their programming skills, but to keep it brief: researchers aren't directly rewarded for being good programmers. It's possible to have a strong research career without really programming all that much. However, if you are a strong programmer, some things get easier. You aren't directly rewarded for it though. For an extreme counter-example, consider the researchers that improve the mathematics platform SAGE. Their academic departments don't care about the software contributions made and basically just want to see research output, i.e. publications.
I think that this keeps most researchers away from making usable tools. It's hard, they're not rewarded for making software artifacts, they're maybe not as good at it as they are at other things.
I think it's feasible for anyone to work on the developer tools side of things, but I think it's going to be really hard for whoever does it, whatever their background is. There are lots of developer tool development projects that encounter limited traction in the real world because the developers do what make sense for them, and it turns out only 20 other people in the world think like them. The more successful projects I've heard about have a tight coupling between language/tool developers, and active language users. The tool developers come up with ideas, bounce them off the active developers, who then try to use the tools, and give feedback.
This puts the prospective "verification tools developer" in a tight spot, because there are only a few places in the world where that is happening nowadays: Airbus/INRIA, Rockwell Collins, Microsoft Research, NICTA/GD. So if you can get a job in the tools group at one of those places, it seems very feasible! Otherwise, you need to find some community or company that is trying to use verification tools to do something real, and work with them to make their tools better.
Compilers, in particular optimising compilers, are notoriously buggy; see John Regehr's blog. An old dream was to verify them. The great Robin Milner, who pioneered formal verification (like so much else), said in his 1972 paper Proving compiler correctness in a mechanized logic, about all the proofs they left out: "More than half of the complete proof has been machine checked, and we anticipate no difficulty with the remainder". It took a while before X. Leroy filled in the gaps. I thought it would take a lot longer before we would get something as comprehensive as CakeML; indeed I had predicted this would only appear around 2025.
> It sucks when that's in your type system
Agreed, and that is one of the reasons why type-based approaches to program verification (beyond simplistic things like Damas-Hindley-Milner) are not the right approach. Speedy dev tools are vital. A better approach towards program verification is to go for division of labour: use lightweight tools like type inference and automated testing in early and mid development, and do full verification only when the software and specifications are really stable, in an external system (= program logic with associated tools).
> making usable developer tools is much, much harder than doing original research.
I don't really agree that the main remaining problems are of a UI/UX nature. The problem in program verification is that ultimately almost everything you want to automate is NP-complete or worse: typically (and simplifying a great deal) you need to show A -> B, where A is something you get from the program (e.g. a weakest pre-condition, or a characteristic formula) and B is the specification. In the best case, deciding formulae like A -> B is NP-complete, but typically it's much worse. Moreover, program verification of non-trivial programs seems to trigger the hard cases of those NP-complete (or worse) problems naturally. Add to that the large size of the involved formulae (due to large programs), and you have a major theoretical problem at hand, e.g. solve SAT in n^4, or find a really fast approximation algorithm. That's unlikely to happen any time soon.
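To make the shape of those proof obligations concrete, here's a minimal sketch of my own (in Python, not anything from the tools discussed above): it computes a weakest precondition for a toy sequence of assignments and then bounded-checks the resulting A -> B implication by brute force, which is exactly the part a real verifier would hand to a SAT/SMT solver. The mini-language and the bounded check are assumptions made purely for illustration.

    # A tiny illustration of where the "A -> B" obligation comes from: compute a
    # weakest precondition for a toy straight-line program of assignments and
    # bounded-check the implication by brute force (not a proof!). Real verifiers
    # hand this formula to a SAT/SMT solver instead.
    def wp(stmts, post):
        """Weakest precondition of a list of assignments (var, expr) w.r.t. post,
        where post is a Python boolean expression over the program variables."""
        for var, expr in reversed(stmts):
            # wp(x := e, Q) = Q[e/x]; the substitution is realised with a lambda binding.
            post = f"(lambda {var}: ({post}))({expr})"
        return post

    # Toy program:  y := x + 1;  z := y * y
    program = [("y", "x + 1"), ("z", "y * y")]
    spec = "z >= 1"                      # the specification B
    pre  = "x >= 0"                      # a candidate precondition A
    obligation = wp(program, spec)       # must show: A -> wp(program, B)

    valid = all((not eval(pre, {"x": x})) or eval(obligation, {"x": x})
                for x in range(-50, 50))
    print("obligation holds on tested range:", valid)

Even in this toy setting the formula grows quickly with program size, which is the blow-up being described above.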
We don't even know how to effectively parallelise SAT, or how to make SAT fast on GPUs. Which is embarrassing, given how much of deep learning's recent successes boil down to gigantic parallel computation at Google scale. Showing that SAT is intrinsically not parallelisable, or not even GPUable (should either be the case), looks like a difficult theoretical problem.
> as a researcher, I both am not rewarded for
I agree. But for the reasons outlined above, that is right: polishing UI/UX is something the commercial space can and should do.
> The community of people that care enough about their software reliability is very small and modestly funded.
This is really the key problem.
Industry doesn't really care about program correctness that much (save for some niche markets). The VCs of my acquaintance always tell me: "we'll fund your verification ideas when you can point to somebody who's already making money with verification". For most applications, type-checking and solid testing can get you to "sufficiently correct".
You can think of program correctness like the speed of light: you can get arbitrarily close, but the closer you get, the more energy (cost) you have to expend. Type-checking and a good test suite already catch most of the low-hanging fruit that the likes of Facebook and Twitter need to worry about. As of 2017, for all but the most safety-critical programs, the cost of dealing with the remaining problems is disproportionate to the benefits. Instagram, WhatsApp and Gmail are good enough already despite not being formally verified.
Cost/benefit will change only if the cost of formal verification is brought down, or if the legal frameworks (liability laws) are changed so that software producers have to pay for faulty software (even when it's not an Airbus A350 autopilot).
I know that some verification companies are lobbying governments for such legislative changes. Whether that's a good thing, regulatory capture, or something in between is an interesting question.
Another dimension of the problem is developer education. Most (>99%) of contemporary programmers lack the necessary background in logic even to think properly about program correctness. Just ask the average programmer about loop invariants and termination orders: they won't be able to produce them even for 3-line programs like GCD. This is not a surprise, as there is no industry demand for this kind of knowledge, and it will probably change only with a change in demand.
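For readers wondering what that looks like in practice, here is a small illustration of my own (not the commenter's): Euclid's GCD with its loop invariant and termination measure written out as runtime assertions in Python. A verifier would prove these for all inputs; the asserts only check the runs we happen to execute.

    # Euclid's GCD with its loop invariant and termination measure as runtime checks.
    from math import gcd as reference_gcd

    def gcd(a, b):
        assert a > 0 and b > 0                 # precondition
        g = reference_gcd(a, b)                # "ghost" value, used only to state the invariant
        while b != 0:
            assert reference_gcd(a, b) == g    # invariant: the gcd of (a, b) never changes
            measure_before = b                 # termination measure: b is >= 0 and strictly decreases
            a, b = b, a % b
            assert 0 <= b < measure_before
        assert a == g                          # postcondition: the result is the gcd of the inputs
        return a

    print(gcd(252, 105))   # 21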
Thanks for the long reply! I don't have the time to continue this. I mostly agree with what you say.
I do think that making verification tools easier is something that researchers could and should be thinking about. Probably not verification and logic researchers directly, but someone should be carefully thinking about it and systematically exploring how we can look at our programs and decide they do what we want them to do. I have some hope for the DeepSpec project to at least start us down that path.
I also have hope for type-level approaches where the typechecking algorithms are predictable enough to avoid the "Z3 in my type system" problem but expressive enough that you can get useful properties out of them. I think this is also a careful design space, and another place where researchers lose out because they don't think about usability. They just say "oh, well I'll generate these complicated SMT statements and discharge them with Z3; all it needs to work on are the programs for my paper, and if it doesn't work on one of them, I'll find one it does work on and swap it out for that one." Why would you make a less expressive system if usability wasn't one of your goals?
I'd be interested in your (brief, if you have no time) suggestions as to what kind of novel interfaces you have in mind. I have spent too much time on the command line to have any creative thoughts about radically different interfaces for verification.
My field is hypnosis, or more generally "changework", which is jargon, but essentially it means hacking the psychology of clients to get desired outcomes.
There's been a renaissance of study in placebo effects, meditation, and general frameworks for how people change belief for therapeutic purposes or otherwise, but to me, that's been going on for a long time and is more about acceptance than being a new development.
One of the most exciting developments that's been coming out recently is playing with language to do what's called context-free conversational change.
Essentially, you can help someone solve an issue without actually knowing the details or even generally what they need help with. It's like homomorphic encryption for therapy. A therapist can do work, a client can report results, but the problem itself can be a black box along with a bit of the solution as well since much of the change is unconscious.
It works better with feedback (a conversation) of course, but often can be utilized in a more canned manner if you know the type of problem well enough.
I'm working on putting together an automated solution based on some loose grammar rules, NLP, Markov chains, and anything else I can use to help a machine be creative with language and help people solve their own problems; as a first step, though, it's meant as a useful tool for beginner therapists, to help them get used to the ideas and the frameworks of language to use.
So essentially, I'm getting a good chunk of the way toward hacking on a machine that can reliably work on people's problems without having to train a full AI or anything remotely resembling real intelligence, just mimicking it.
Before you go thinking, "Didn't they do that with Eliza?" Well yes, in a way, but my implementation is using an entirely different approach.
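To make the "Markov chains" ingredient concrete, here's a generic word-level Markov chain sketch; it is emphatically not the parent's system, and the tiny corpus of prompts is made up, but it shows the basic building block for generating language variations.

    # A generic word-level Markov chain over a tiny made-up corpus of prompts.
    import random
    from collections import defaultdict

    corpus = ("what would it be like if that problem were already solved . "
              "who would you be if that problem were not yours . "
              "what changes when you notice the problem from a distance . ")

    transitions = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

    def generate(seed="what", max_words=15):
        out = [seed]
        while len(out) < max_words and out[-1] in transitions:
            out.append(random.choice(transitions[out[-1]]))
            if out[-1] == ".":
                break
        return " ".join(out)

    print(generate())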
The place I learned it was hypnosistrainingacademy.com; the man its founder derived his teachings from was John Overdurf, who had a program called Beyond Words. I'd start with Igor's material first. He calls his version (which is based upon, but wholly distinct from, Beyond Words) Mind Bending Language. There are even card decks and training programs to consume, or free articles if you wish to check it out.
I wasn't interested in long citations or garnering proof of my work in particular with training a machine to do this work. I simply wished to add to this thread and did so, in order to show someone out there, maybe even you, what else is going on that is exciting in my little corner of the world.
I'm not that good of a programmer, so it's not in a state that it does work yet. I hope my original comment didn't suggest otherwise, but let me be perfectly clear here: I have no working machine implementation that can do what I want yet. It can work with simple canned responses like Eliza, but it's not enough. I am working on employing all of the techniques and tools mentioned, but progress is slow.
However, this is work and change I employ daily with my clients professionally and I can assure you that it does work.
You don't even have to take my word for it.
Consider... seriously consider: who would you not be if you weren't you?
If you thought about that one and felt a little spaced out for a second, you did very well.
If you came up with something quickly like "me" and didn't really actually consider the question, allow me to pose another to you. Again, seriously consider this. Read it a few times. Imagine emphasis on different words each time.
Who are you not without that problem you are interested in solving?
This work can be made more difficult by text only and seriously asynchronous communication, which is why I mentioned it being easier within conversation.
If you are interested in more, google "mind bending language" or "attention shifting coaching" and find Igor Ledochowski and John Overdurf. Their work has helped me change the lives of thousands.
Depending on how you parse the sentence, either "someone else" or "that's just a paradox". Essentially the concept of "me" as an entity is fundamentally flawed.
Playing with the meanings of "me" and "not me" in a subjunctive form doesn't make the question very interesting (as in non-trite), to be honest. I guess the intent is not to be fresh but to be thought-provoking or similar, or setting the listener in a certain mindset? Still, sets my mind in the "meh" state.
> Who are you not without that problem you are interested in solving?
I'm not my problems. I'm also not not-my-problems. Actually I am not (I isn't?). I don't see how this helps with anything, though.
Either way, your questions pose (to me) more philosophical thinking (which I already do, anyways) than mindbending or whatever. Maybe my mind is already bent... and I have to say it didn't go very well ;)
A long time ago I came to the conclusion that these questions are merely shortcomings in how language and cognition works. Metaphysics, ontology (and even epistemology) are just fun puzzles with no solution, which I'm ultimately obliged to answer with "who the f--- cares".
Kant was right.
Not that anything you said is directly contradicted by Kant. In fact I'd say it fits very well within the idea that "human mind creates the structure of human experience". It's just never been really useful to me in any way. I really, really, want to know more of (and even believe in) your changework but, often being presented with vague ideas, no one has ever made a solid case on how it isn't, as GP said, charlatanry.
No, it's not true of most posts in this thread. Many posts in here have details. Admittedly, not enough to take the audience and make them expert enough to understand the breakthroughs. But enough to show there is something real being described.
Re: friendliness -- I believe I expressed the opinion that someone is a charlatan in as friendly of a manner as is possible.
Most certainly. This current work I am doing builds upon the mountaintops of huge amounts of work built by two men who base their work on yet more huge foundations.
As far as context-free therapy goes, that's a bit of an advanced subject, but can be learned and mastered through some of their programs.
The key tenets are simple though. As a model, consider that human language builds around 5 concepts: Space, Time, Energy, Matter, and Identity. These 5 also map cleanly onto questions (the 5 Ws and H) and onto predicates in human language. Space is Where, Time is When, Energy is How, Matter is What, and Identity carries two: Why and Who.
Every problem you've ever had is built up of some combination of the 5 in a specific way, unique to you.
The pattern of all change is this:
1) Associate to a problem, or in other words, bring it to mind.
2) Dissociate from the problem, or basically get enough distance from it so that you can think rationally and calmly. Similar to a monkey not reaching for a banana when a tiger is running after it, your brain does not do change under danger and stress well. It can, but that usually leads to problems in the first place.
3) Associate (think about, experience) a resource state. Another thought or experience that will help with this one, for example if someone were afraid of clowns, I'd ask a question like, "What clowns fear you?" It usually knocks them out of the fear loop for a second.
4) While thinking about the resource, recall the problem and see how it has changed. Notice I said has changed. It always changes. You can never do your problem the same again. Will this solve things on the first go? Maybe. Maybe not, but it's enough to get a foothold and a new direction and loop until it's done.
Which is what makes this fun and exciting to do in person, and fun and exciting to help teach a machine to mimic it too.
It seems like the missing step, right between 3 and 4, is "and then a miracle occurs."
That's why I made my original comment. Maybe you're not a charlatan, in which case I'd have to conclude you're thinking irrationally and have been deceived by some form of magical thinking.
You have not proposed any mechanism by which these steps can form a consistent treatment for problems that individuals have struggled with for years. You've merely declared that it will, and a whole lot of faith is required.
Other posts in this thread mostly propose a mechanism, even if we readers don't have the prerequisites to fully understand it. For example, consider the proposal that machine learning could be applied to the mundane tasks a radiologist performs. It may or may not pan out, but it has a basis.
I come to changework (coaching) from a slightly different direction; I'll share what I see as underlying mechanisms in case it helps.
Basically, what we do is based on how we see things. If we can change how we see things, then new actions & results become available.
Then the question just becomes, how can we change how we see things.
If how we see something comes from what we've experienced, then introducing a new experience can have us see it differently.
If how we see something comes from what we think about it, then we can introduce a new thought about it.
The point being to change the internal mental model related to the thing, so that we see it differently, we experience it differently, it occurs for us differently than it did before.
In the case above, step 3 introduced a new thought and internal experience related to the thing, and thus the step between 3 and 4 is, "their internal mental model, connected to the thing, changed".
Again, the mechanism (and the missing step) becomes, "change how we see & experience something, change our internal model relating to it". And then, some possibilities for triggering that include having a new thought about it, having a new experience about it; and various techniques can exist for introducing those experiences or thoughts.
At least, that's how I see it (how it occurs for me, how I've experienced it).
Not really my main field, but in web technology it seems that serverless architectures such as AWS Lambda will be a pretty big game changer in the near future:
Lambdas are lightweight function calls that can be spawned on demand in sub-millisecond time and don't need a server that's constantly running. They can replace most server code in many settings, e.g. when building REST APIs that are backed by cloud services such as Amazon DynamoDB.
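For the curious, the pattern looks roughly like this: a minimal sketch of a Python Lambda handler sitting behind API Gateway and writing to DynamoDB, where the table name and payload fields are made-up placeholders rather than anything from a real deployment.

    # Hypothetical example: an AWS Lambda handler (Python runtime) behind API
    # Gateway that writes an item to a DynamoDB table. "items" and the payload
    # fields are placeholders.
    import json
    import uuid
    import boto3

    table = boto3.resource("dynamodb").Table("items")

    def handler(event, context):
        payload = json.loads(event.get("body") or "{}")      # API Gateway proxy event
        item = {"id": str(uuid.uuid4()), "name": payload.get("name", "unnamed")}
        table.put_item(Item=item)                             # no server to manage
        return {"statusCode": 201, "body": json.dumps(item)}

The whole "server" is just this function plus configuration; the platform handles provisioning and scaling.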
I've heard many impressive things about this way of designing your architecture, and it seems to be able to dramatically reduce cost in some cases, sometimes by more than 10 times.
The drawback is that currently there is a lot of vendor lock-in, as Amazon is (to my knowledge) the only cloud service that offers lambda functions with a really tight and well-working integration with their other services (this is important because on their own lambdas are not very useful).
I have to admit, I'm pretty bearish when it comes to serverless. Mostly because it's an abstraction which leaks to hell and back.
Your input is tightly restricted, and with Amazon in particular, easy to break before you even get to the Lambda code (the Gateway is fragile in silly ways). Your execution schedule is tightly controlled by factors outside your control - such as the "one Lambda execution per Kinesis shard". You can be throttled arbitrarily, and when it just fails to run, you are limited to "contact tech support".
In short, I can't trust that Lambda and its ilk are really designed for my use cases, and so I can only really trust it with things that don't matter.
I'm bearish on it right now, though conceptually it's a fantastic idea which just has quite a way to go before it's ready for prime time. I definitely think a lot of people have jumped the gun by pushing serverless before it's really ready for the outside world.
But the reality is that they don't, with cold-start times upward of 30 seconds. If you use them enough to avoid the cold-start penalties, then you're better off with reserved instances because Lambdas are 10x the price. If you can't handle the 30-second penalty, then you're better off with reserved instances because they're always on. If you have rare and highly latency-tolerant events, then use Lambda.
To add, you don't really need that many Lambda calls before it costs the same as just having a small, always-running instance. You can still use it Lambda-style if you wish, with automatic deployment.
I am pretty bullish on "serverless". I really do think it's the future. It fulfills the vision of Cloud Computing. But it's early days and I wouldn't yet bet the ranch on it. I am doing a new project with Azure Functions and so far am quite happy with the offering.
I don't know; to me it looks like the serverless benefits (i.e. theoretically lower cost) are not worth the downsides (essentially complete vendor lock-in), but I would love to hear why I am wrong :-)
If you go serverless - specifically AWS Lambda - then you must be comfortable using old and out-of-date programming environments, as these are what AWS Lambda supports.
In the Bitcoin space, I'm most excited about the Lightning Network [0][1] and MimbleWimble [2][3], which are in my view the two most groundbreaking technologies that really push the limits of what blockchains are capable of.
It has to do with the need for increased block sizes. Right now, each block (chunk of validated transactions) can only be 1MB in size. This restricts the total throughput of the network, but keeps the total size of the blockchain down and the growth rate low.
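A rough back-of-envelope (my own, with an assumed average transaction size of ~250 bytes) shows why the 1MB cap translates into a hard throughput ceiling:

    # Back-of-envelope: 1 MB blocks, ~10 minute average block interval, and an
    # assumed average transaction size of 250 bytes.
    block_size_bytes = 1_000_000
    avg_tx_bytes = 250          # assumption
    block_interval_s = 600

    tx_per_block = block_size_bytes / avg_tx_bytes
    tps = tx_per_block / block_interval_s
    print(f"~{tx_per_block:.0f} transactions per block, ~{tps:.1f} tx/s network-wide")

That works out to only a handful of transactions per second for the entire network, which is what drives the scaling debate.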
The original expectation was to gradually increase the block size to increase capacity as more users joined the network, eventually transitioning most users to "thin" clients that don't store the (eventually enormous) complete blockchain.
The Core devs right now feel that the current situation (every node a full peer with the complete chain, but maxed out capacity and limited throughput) is preferable for a number of reasons including decentralization, while the Unlimited devs feel that it's time to increase the block size in order to increase capacity and get more users on the network, among other things.
Changes like this are usually decided by the miner network reaching consensus, with votes counted through hashing power/mined blocks. I'm not sure where things stand at the moment, but it's been interesting to observe.
I understand it's become a rather contentious topic in the community.
I'm not the best person to ask, and I don't fully understand segwit, but I think it's the Core devs (partial) solution to the problem of scaling up the network without increasing block size.
IIUC, segwit makes certain kinds of complicated transactions easier to handle (ones with lots of inputs/outputs), possibly allowing more transactions to fit in less space, and it lays useful groundwork for overlay networks like Lightning. I think the thinking is that overlay networks can be fast, and eventually reconcile against the slower bitcoin network.
Unlimited would rather just scale up the bitcoin network in place, instead of relying on an overlay network.
You'd probably get better information from bitcoincore.org and bitcoinunlimited.info, or the subreddits /r/bitcoin and /r/btc (for core and unlimited, respectively, they split after moderator shenanigans in /r/bitcoin).
With the brutalist movement, something new started: people went back to code editors to create websites by hand, skipping the third-party, non-web-native user interface design tools that come prefilled with common knowledge and make websites look uniform.
The idea of design silos and brand-specific design thinking is dropped: no more bootstrap, flat design, material design, etc.
It's like going back to the nineties and reinventing web design. You start from scratch, on your own, and build bottom-up without external influence or help.
It's about creativity vs. the bandwagon, about crafting your own instead of putting together from popular pieces.
From skimming the thumbnails in the linked site, I don't know that I'd call HN or reddit brutalist designs, maybe "classic" or "skeletal".
The emphasis is on dense content with simple links, and there's not a lot of "live" interactive content on the page. I don't find either site to be particularly ugly or visually offensive, contrary to many of the linked "brutalist" sites.
I'd love to see more sites in the HN/reddit model (here's hoping reddit's coming desktop redesign doesn't lose that), but I wouldn't want to actually use more brutalist sites (outside of individual creative expression, anyway).
I think Craigslist is probably a good example of brutalism. But that website you provided doesn't even have a mobile version, which comes across as inexperienced, not "minimalist"
Aerospace Engineer - Enhanced Flight Vision Systems
TLDR: Fancy fused infrared (LWIR/SWIR) and visible spectrum camera systems may 'soon' be on a passenger airliner near you.
Using infrared cameras to see through fog/haze to land aircraft has been happening for a while now, but only on biz jets or on FedEx aircraft with a waiver. The FAA has gained enough confidence in the systems that they have just opened up the rules to allow these camera systems to be used to land on passenger aircraft.
Combine that with the fact that airports are transitioning away from incandescent lights to LEDs (meaning a purely IR sensor system is no longer enough), and you get multi-sensor image fusion work to do and a whole new market to sell them to.
Here is a blog post (from a competitor of ours) talking about the new rules.
Say with a car that has a heads up display for night vision, if it had an SWIR sensor and IR lights, can that cut through fog too? Or is it the LWIR that is able to do that?
SWIR sensors are there for hot burning lights. LWIR (aka thermal) sees things that are every day temperatures. Both wavelengths have better transmittance through fog than visible light so we say those sensors can 'see through' fog. The physics comes down to the wavelength of the light vs the size of the molecules of the medium the light is trying to get through [1].
Another fun part is that fog at one airport can be different than fog at another, so while the weather conditions at both locations may say visibility is "Runway Visual Range (RVR) 1000ft", that is for a pilot's eyes, and the same camera may work just fine at one location and not at all at the other.
The era of gravitational-wave astronomy is beginning in earnest with LIGO's new data-collection run. It had been offline for upgrades from 2016/02 to 2016/11 and is now even more sensitive.
[http://www.ligo.org/news/]
My field: PUFR, http://pufr.io (an IoT security startup); for the last few years I've been doing R&D on the design of a graph computing model that unifies some of the ideas above.
As an Android developer, I'm most excited about instant apps. If it works as marketed, you won't have to hold on to the apps which you use maybe once or twice a week. Instead, you'll be able to download the required feature/activity/view or whatever else on the fly.
I'm not sure I did justice to instant apps, because there's a language barrier at play. But here's an example: I use the Amazon app maybe once every 2 weeks, and yet it's one of the apps consuming the most memory on my phone due to background services. After Amazon integrates instant apps, I'll be able to delete the app and just Google-search for the product on my phone. Google search will then download the required page as an app, giving me the experience of an app without even having it installed on the phone.
Also, to answer your question: no, it's not the same as a website, because it will be a native Android app with the ability to communicate with the Android OS, like any other Android app.
The possibilities, in terms of improved UX, that you can accomplish with instant apps are endless. It all comes down to how you want to use it.
I watched the video and honestly, this is terrible.
If I'm clicking on a link I want to open it with my browser, not with some app. I find this extremely annoying with facebook and even the news carousel already.
I can't open new tabs, copy the URL, or switch to other tabs like I would in the normal browser. This is extremely confusing and I don't see how this benefits me in any way.
I couldn't agree more. I'm excited about the idea of streaming apps, but the execution here is terrible. How do you control which url opens which app? If somebody sends you a reddit or hn link, which app does it run? There are dozens out there for both! The whole point of the app is not to have to manage these things, but the only way I can see this working is if you have yet another area in settings to manage which apps open for which links.
A better implementation would have been to have a popup with a list of compatible apps to run, including an option to run it in a browser like any normal link.
I really hope the NFC bit is opt-in by default. I don't want to have to manually disable it every time I get a new phone. In fact, even if I've opted into having the SF Park app run when I'm near a parking meter, I want the option to "reject" it just like I do when I get an incoming phone call.
I like that even less. If you haven't manually added an app association, it defaults to opening the app specified in the digital assets file without any notification to the user. This is the opposite of a sane default. The first time an app wants to run, it should always let the user decide whether they want to run the app or continue using the browser. Otherwise, this is a recipe for malware.
BTW, you can do this kind of thing now with classloading. I'm doing development on my phone, and it's far faster to load new class versions than to install a new app version. Google will have a framework around this.
Full OS access could mean permissions per page - could be awkward or OK. Much of the app vs. webpage debate here is the same as always - though the offline advantage is gone.
I'm loading a GLSurfaceView, which extends SurfaceView, which extends View. So, yeah.
There shouldn't be any problem classloading an Activity, using reflection to instantiate it, and treating it as you would a runtime Activity (as opposed to one declared in AndroidManifest.xml). But I haven't actually done this; there could be some gotchas in incorporating a runtime Activity into the GUI.
IIRC google had a few hits on classloading Activities.
It's still a native app, you just don't have to explicitly download it, or download the whole thing. It still runs native code and can take advantage of Android-specific features.
I'm curious about the "just a website" bit as well. It's slow going, but new features like service workers, web workers, web sockets, webrtc, seem to be closing the gap between "website" and app.
Is there some point where websites start to significantly displace apps?
Ah, yes. Guess I should have said iphone / android native apps, especially ones that depend on network data such that the native app wouldn't be any faster than a web site.
I don't get the appeal, for example, of native apps for things like airlines, amazon, ebay, etc.
Speed, responsiveness and the ability to work offline. These things sort of work for webapps on the desktop because desktops aren't cpu and memory restricted. Phones are. As a result, webapps are just too damned slow and frustrating to use on a phone.
"Is there some point where websites start to significantly displace apps?"
It seems to me that this is already slowly happening and this instant-app thing is the reaction. After all, Google would lose control if everybody started to use the browser.
I do electronic music. The rise of platforms like Bandcamp and Patreon, and the abundance of high quality free/inexpensive tools and guides is raising the bar for quality in independent music, and making it easier for more people to get paid in whatever niches they prefer (vs. going for a mass audience).
"The rise of platforms like Bandcamp and Patreon, and the abundance of high quality free/inexpensive tools and guides is raising the bar for quality in independent music"
No easy solution here; what is the "Bandcamp of concert venues"? [0] Is there a venue problem of "Where do you play?"
[0] I know the solution is a political one due to land usage, sound restrictions and venue size.
Nope. I have a MIDI keyboard, but it doesn't work half the time, so I rarely bother with it. You can do everything inside the DAW with the MIDI roll or notation editor.
I'd recommend selecting a DAW and learning it. Really learn it. We have countless great tools in music production available - it mostly comes down to how well you can handle them.
Personally, I love and recommend Ableton Live which features an easy to use interface, workflow, lots of options for experimentation and extensions, great and large community as well. Good choice for beginners and experts alike. Plus, with Ableton Push you have the option to get an excellent hardware controller that is tailored to your DAW, but it isn't something you'd need from the start.
Alternatively, you almost can't beat Logic on price considering its features and performance. I'd say it is more complex, but that's subjective.
Both Logic and Live (Suite version) offer a complete solution, including high quality instruments, synths and effects.
Hardware is optional, but a simple midi keyboard for less than a 100 bucks will help a lot.
I plan to switch to Ableton once I earn enough from music to pay for it, but that's because it's made for the kind of music I do. Everyone I know uses it, so it's easy to find advice and tutorials.
For now, Reaper and a few free VSTs will do. I find myself bumping up against the fact that Reaper was made for live music and its devs are understandably keeping the focus there, even though they do good work on the MIDI roll. They always nail down a few irritants in each release.
You'll go further and have an easier time if the community around your tools makes the same kind of music. Choosing tools mostly comes down to what you want to do. If you want to do electronic, Ableton is a good bet.
While I'm positive you know that, don't make the mistake of limiting Ableton to electronic music - it has obvious strengths in that department, but there's really no limit to the genre and style of what can be done with DAWs like Live, Logic and Co.
By the way, if Ableton remains too expensive for your taste, there's always Bitwig, which isn't quite as mature and has a much smaller, but growing community, yet it's very similar to Ableton's approach to music production.
Safe selective destruction of cells via their internal chemistry, not surface markers, via uptake of lipid-encapsulated programmable suicide gene arrangements.
With the right program and a distinctive chemistry to target in the unwanted cell population, this flexible technology has next to no side-effects, and enables rapid development of therapies such as:
1) senescent cell clearance without resorting to chemotherapeutics, something shown to extend life in mice, reduce age-related inflammation, reverse measures of aging in various tissues, and slow the progression of vascular disease.
2) killing cancer cells without chemotherapeutics or immunotherapies.
3) destroying all mature immune cells without chemotherapeutics, an approach that should cure all common forms of autoimmunity (or it would be surprising to find one where it doesn't), and also could be used to reverse a sizable fraction of age-related immune decline, that part of it caused by malfunctioning and incorrectly specialized immune cells.
And so forth. It turns out that low-impact selective cell destruction has a great many profoundly important uses in medicine.
For 3) does destroying all mature immune cells also get rid of all immunities that the patient has gained throughout life from vaccines, previous illness, etc? Would it make the patient very fragile, not to have gone through gaining those immunities at a young age?
Revaccination, yes, definitely necessary in the idealized case of a complete wipe of immune cells. But that's a small problem in comparison to having a broken immune system. Just get all the vaccinations done following immune repopulation.
Part of the problem in old people is that they have too much memory in the immune system, especially of pervasive herpesviruses like cytomegalovirus. Those memory cells take up immunological space that should for preference be occupied by aggressive cells capable of action.
Another point: in old people, as a treatment for immunosenescence, immune destruction would probably need to be paired with some form of cell therapy to repopulate the immune system. In young people, not needed, but in the old there is a reduced rate of cell creation - loss of stem cell function, thymic involution, etc. That again, isn't a big challenge at this time, and is something that can already be done.
At present, sweeping immune destruction is only used for people with fatal autoimmunities like multiple sclerosis, because the clearance via chemotherapy isn't something you'd do if you had any better options - it's pretty unpleasant, and produces lasting harm to some degree. Those people who are now five or more years into sustained remission of the disease have functional immunity and are definitely much better off for the procedure, even with its present downsides, given where they were before. If the condition is rheumatoid arthritis, however, it becomes much less of an obvious cost-benefit equation, which is why there needs to be a safe, side-effect-free method of cell destruction.
From one of the companies working with an implementation:
----------
"Our approach is quite different from most other attempts to clear these cells. We have two components to our potential therapy. First, there is a gene sequence consisting of a promoter that is active in the cells we want to kill and a suicide gene that encodes a protein that triggers apoptosis. This gene sequence can be simple, like the one that kills p16-expressing cells, or more complicated, for example, incorporating logic to make it more cell type specific. The second component is a unique liposomal vector that is capable of transporting our gene sequence into virtually any cell in the body. This vector is unique in that it both very efficient, and appears to be very safe even at extremely high doses."
"There's a subtle but profound distinction between our approach and others. The targeting of the cells is done with the gene sequence, not the vector. The liposomal vector doesn't have any preference for the target cells. It delivers the gene sequence to both healthy and targeted cells. We don't target based on surface markers or other external phenotypic features. We like to say "we kill cells based on what they are thinking, not based on surface markers." So if the promoter used in our gene sequence (say, p16) is active in any given cell at the time of treatment, the next part of our gene sequence - the suicide gene - will be transcribed and drive the cell to apoptosis. However, if p16 isn't active in a given cell, then nothing happens, and shortly afterwards the gene sequence we delivered would simply be degraded by the body. This behavior allows our therapy to be highly specific and importantly, transient. Since we don't use a virus to deliver our gene sequence, and our liposomal vector isn't immunogenic, our hope is that we should be able to use it multiple times in the same patient."
How do you propose to infect cells deep inside solid tumors? How do you target cells that have lost cell surface markers? Your example, p16, is used in many cells intermittently, or only in certain microenvironments. How do you deliver? IV, topical, tumor injection (and if so, what about needle-tract seeding)?
I think web assembly is the piece most likely to change front end development in a meaningful way. A little hard to see now, as the WASM component has no direct access to the DOM, no GC, and no polymorphic inline cache. So, dynamic languages are hard to do with WASM. Once those gaps are closed, however, it should be interesting to see if javascript remains the lingua franca or not.
For a C# developer into microservices, there's a lot to be excited about.
.Net Core: Finally, cross platform .Net. Deploying .Net services to Linux is a dream come true. Can't wait for the platform to stabilize.
Windows Server 2016: For "legacy" applications forced to stay on Windows, containers and Docker on Windows is a game changer. One step closer to hopefully making Windows servers somewhat manageable.
I've toyed with it on and off ever since the first beta. It's still not good enough, unfortunately. I need a very simple way to configure an instance, version it, and deploy it in minutes. When that works frictionlessly I'm all over getting it pushed in my org, and once the tooling in VS supports it, it will be easier to get other developers to do it.
I'm a materials engineer; these are two interesting developments in my field at the moment:
Metamaterials: Essentially a material engineered to have a unique property. By precisely controlling a material's structure you can influence how it interacts with electromagnetic waves, sound, etc. You can create materials with unique properties such as a negative refractive index over certain wavelengths. It's kind of a novelty, but people are building "cloaking devices" using metamaterials, i.e. bending electromagnetic waves around a material in certain ways to make it appear invisible to certain frequencies.
Graphene (and other 2D materials): These materials are a relatively recent discovery, graphene was confirmed in 2004 and it has a number of interesting properties. In particular its electrical and thermal properties make it promising for a number of applications. I think it could possibly find applications in batteries, transistors and capacitors. At the moment it is a very expensive material to manufacture which makes it (currently) unsuited for commercial applications. There is a heap of active research involving graphene at the moment.
I'm honestly just super-pumped about any artificial intelligence system that's starting to get an intuition of physics.
Google's DeepMind put out some pretty cool stuff recently [1], but I'm mostly just excited for anything that Ilker Yildirim [2] is doing with Joshua Tenenbaum, because it seems to triangulate more with how humans think about physics. When I was at CogSci 2016, Joshua mentioned combining this with analogical reasoning, and that also sounded super cool, even though I'm not sure how the two fit together.
On the web development part of my job, I'm excited about Elixir / Phoenix getting more and more mindshare. People I talk to are actively trying Elixir out and evaluating it as the tool of choice for their next projects.
On the networking side of things, I'm excited about network virtualization and the potential that tools like Docker and Kubernetes give to virtualizing large and complex network topologies.
And as an employee of an IT-heavy enterprise, seeing DevOps becoming a thing makes me happy, even if adoption is slow and expectations are high. It's still better than waiting 6 months to get a couple of VMs to deploy my projects to...
In my field of quantum information processing, the current hype is all about "Quantum Supremacy". The field currently has its sights set on the goal of producing an experiment where a quantum system performs a computation faster than any known computer can, perhaps computing something that no current computer can compute in a reasonable amount of time. Unlike much of the work in the field up to this time, this requires a crazy amount of engineering, more than a typical lab can undertake if they hope to be publishing interesting results in the meantime. My hypothesis is that this will likely happen from either a company (IBM, Google) or a government lab (if they are allowed to publish).
We're porting a sizable application to .Net Core so we can be on Linux and save cost and time on instance launch.
I'm writing an in-depth blog post series about the process because I haven't found any significant migration stories. I'm hoping it will help a lot of people through the process.
Couldn't agree more. If .NET Core had been around 5 years ago, I never would have bothered to learn Rails. That's not a knock against Rails (ASP.NET still can't touch their "it just works" asset pipeline), but I've been a C# developer for much longer, and a Linux fan for even longer. So it's awesome that I can finally have my cake and eat it too.
C# has become such a joy for me to work with. The language itself has been progressing at an impressive pace without adding too much superfluous stuff or making it unwieldy. I feel like it's gotten to the point that most of the pain of strong typing is gone without sacrificing any of the benefits.
It'll be on http://engineering.rallyhealth.com/ when it's done (pardon the looks, site is WIP). It was just going to be a long post, but I'm trying to be as helpful/detailed as possible. Probably won't publish anything until I have the whole thing done.
I don't have an exact date unfortunately, but it'll be on there sometime before March 2nd to coincide w/ a .NET Rocks podcast. I'll share on HN though and bump this comment when it's released : )
Web analytics, I feel, is years behind data science, but tools like http://snowplowanalytics.com/ are becoming much more widespread and are taking market share away from Google and Adobe, which is good for everyone. Free GA is still the best tool for small sites.
Field: embedded software. To me RISC-V is the most exciting thing for the next few years. The performance appears to be awesome, and free CPU IP will allow more varieties of specialized low-cost chips for specific use cases. It should also have a positive effect on development environments by encouraging wider use of free toolchains.
The "minion cores"[1] idea in the LowRISC RISC-V project is a promising idea.
It looks a lot like the PRUs in the Beaglebone black, where you could have normal, non-real time OS (linux) run on the main cores, but delegate real time tasks to the minion cores...and they share memory directly with the main cores.
Basically, you end up with the capabilities of both a Raspberry PI and an Arduino, on a single chip.
I'm a complete outsider, but I've always been curious why people didn't build on OpenSPARC. Is it patent-encumbered, etc? Or, is there some technical reason that makes that architecture less worthwhile than a new one?
I'm also doing embedded work, but I don't really see - or expect - performance from the RISC-V cores above similar CPU designs that consume about the same area/gate-equivalents; did I miss some recent results?
Of course, freely-available and well-supported CPU IP can be very cool!
The SiFive chip is >300MHz on a 180nm process, but the higher-end RISC-V chips, while having competitive perf/clock, still have lower clock speeds than the big guys. That is probably just a matter of time though.
That would be an interesting question to ask within other specialized communities and collect the answers in one big post. Ain't nobody got time for that?
In UX, an interesting trend is a flood of software tools which help during design, evaluation, research, etc.
Also, adaptive UI that changes based on user attributes and past behaviour seems to be trendy now (supported by the online-marketing field with auto-optimizing interfaces that optimize for conversion autonomously, etc.).
In my initial comment I mixed up two things into one. Let me clarify.
Adaptive content:
What you see is based on your previous usage of the app/site and not generalized what everyone is looking at. Pretty common...
- Amazon suggestions, Google results, Facebook stream or even your auto-correct suggestions of your phone keyboard.
Adaptive Interfaces:
Where the actual controls, tools, menus change in favor of your usage behaviour, or desired behaviour of "users like you".
(It's not quite clear if this actually helps or harms the UX, because the UI could change without the user understanding why a menu item is no longer available where it used to be.)
- I am drawing a blank on real-world example software here
- But web/landing-page optimization tools like Optimizely use predefined rules to change anything in the UI (like showing a CTA button or a video, hiding a menu, etc.), while others like Dynamic Yield are moving in the direction of AI-automating that test generation and decision making in favor of a single metric (CTR / conversion / etc.), as sketched below.
In the end you could argue that every real-world application is only using "adaptive content" and not actual "adaptive UI".
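As a sketch of what "AI-automating the decision making in favor of a single metric" can mean in the simplest case, here is a generic epsilon-greedy bandit choosing between two hypothetical UI variants; this is my own illustration and not how Optimizely or Dynamic Yield actually work.

    # Epsilon-greedy bandit choosing between two hypothetical UI variants and
    # optimizing a single metric (conversion). Purely illustrative.
    import random

    variants = {"cta_button": {"shows": 0, "conversions": 0},
                "cta_video":  {"shows": 0, "conversions": 0}}

    def choose(epsilon=0.1):
        if random.random() < epsilon:          # explore occasionally
            return random.choice(list(variants))
        # otherwise exploit the variant with the best observed conversion rate
        return max(variants,
                   key=lambda v: variants[v]["conversions"] / max(variants[v]["shows"], 1))

    def record(variant, converted):
        variants[variant]["shows"] += 1
        variants[variant]["conversions"] += int(converted)

    true_rates = {"cta_button": 0.05, "cta_video": 0.08}   # made-up ground truth
    for _ in range(10_000):
        v = choose()
        record(v, random.random() < true_rates[v])
    print({v: s["shows"] for v, s in variants.items()})    # the video variant should dominate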
The way we educate our kids hasn't changed a lot in centuries. MOOCs are great, but the completion rate is a real and as-yet-unsolved problem.
I believe the biggest advancement in the field of education is going to come with VR. With VR, we can dramatically reduce the cost of "learning while doing", which should be the only way of learning. With AI, we can provide highly personalised paths for learners.
VR and AI technologies are finally coming to a point where, together, they can provide a breakthrough in industries which have been mostly untouched for decades.
What about the kids who won't put on the VR headset because they prefer to snap-chat, chat, youtube, waste time, do social posturing?
I think, for middle school, it's easy to underestimate how much of education is not actual content. How do you deliver education that targets the teenage anger / passivity / disappointment / and emotional roller coaster?
This is probably my resentment speaking, but I resonate with this Paul Graham essay about school years being miserable primarily due to school, not puberty.
MOOCs have been a godsend to me, allowing me to revise long-dormant knowledge. In the last 4 years, I have done courses in Statistics, Chemistry and JavaScript. It's a buzz to learn things better, the second time around. I completed the courses because I needed the knowledge - I am a chemistry teacher.
>I believe the biggest advancement in the field of education is going to come with VR
I'm 100% with you on this. I've been saying this since VR became mainstream. I'm dying to start something in the e-learning space that takes advantage of VR/augmented reality but have no idea where to start.
Here's an odd, tiny, somewhat controversial/dangerous-sounding yet possibly interesting idea I thought of a little while ago that you might like to play with: a road-crossing simulator/trainer (and related concept areas).
My house fronts onto a small but fairly active 4-lane regional/suburban highway which I need to cross whenever I get the bus home, and also sometimes when I leave depending on which direction I'm headed. There are complete traffic breaks every 1-5 minutes or so, and it never gets jammed (there are no traffic lights nearby and it's a long stretch of road), so for a highway it's reasonably tame. My main goal is always trying to take advantage of the "near-breaks" that sometimes happen where the road almost completely clears and I can cross if I'm willing to dodge traffic. I especially try to do this when there's a bus approaching the stop across the road!
I've slowly gained confidence and experience over the past 13 years I've lived where I do (I'm 26 now, FWIW), and I now know when, how and why I can safely begin to cross even when cars are still on the road, so I often don't have to wait for complete breaks. That's been a fairly recent development; my progress hasn't been instantaneous.
I'm at the point where I'm trying to improve my ability to break the road down into lanes and actively track the activity in all the lanes simultaneously, so I can properly "leap-frog" across the road even more quickly. I am (perhaps understandably) not very good at this bit at all: I've found that taking (opposing!) traffic motion across multiple lanes and turning it into a precise, realtime and confident/low-doubt go/no-go decision actually requires a fair bit of neurological development. The problem is that road-crossing has few common analogues among other life-skill situations that relate to spatial awareness, gross motor coordination, etc., so it's hard to create and iteratively improve this ability.
The main two reasons for this, I think, are that a) road-crossing is potentially life-threatening, so you want to get it right, and b) (the important bit) we all seem to be taught to treat crossing roads as almost as dangerous as jumping out of planes: something nerve-wracking that must be done as quickly as possible before any damage (which could happen at any moment) is done. I'm guessing this ideology gets rooted in our heads due to our parents' overarching instincts to protect us from harm at all costs, juxtaposed with the fact that 99.9% of the population does not have a sound understanding of psychology or an idea of the impact of different presentational styles. (In my own case I was simply taught to be extremely careful, but I only had experience with high-traffic roads after 13, and I had a general fear of roads before that point, as I didn't need to cross that many, and when I did I was never alone.)
I think that if we can bootstrap ourselves to the point where we can eliminate the FUD and "helpless prey"/deer-in-headlights mentality surrounding crossing roads, we can begin to actually develop mental models that will likely serve us equally well in many different kinds of split-second situations that involve precise timing.
VR would be a way to get to that point: by creating a virtual environment full of various different types of vehicles and environments and simulating those vehicles bearing down on us (using a highly physically accurate 3D engine), we could actually learn through infinite repetition what 60 miles an hour looks like starting half a mile away, or what 20 miles an hour looks like starting a quarter of a mile away, etc etc. And we could slowly get to the point where we can confidently say things like "I know that I'll just make it across this road before that car does if it doesn't change speed" with much greater accuracy than we currently can. Some users may even begin to accurately guess vehicle speed just by watching the vehicle for a few seconds. It would be kind of fun and awesome to make a VR system where kids can be exposed to these kinds of experiences from a young age as an almost standard thing.
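For a sense of the numbers such a simulator would be training, here is a back-of-envelope of the go/no-go judgment described above; the lane width and walking speed are assumptions for illustration.

    # Assumed figures: 4 lanes of 3.5 m each, walking speed 1.4 m/s, and a car at
    # 60 mph starting half a mile away.
    MPH_TO_MS = 0.44704
    car_speed = 60 * MPH_TO_MS                # ~26.8 m/s
    car_distance = 0.5 * 1609.34              # half a mile in metres
    time_to_arrival = car_distance / car_speed

    lanes, lane_width, walking_speed = 4, 3.5, 1.4
    time_to_cross = lanes * lane_width / walking_speed

    print(f"car arrives in ~{time_to_arrival:.0f} s, crossing takes ~{time_to_cross:.0f} s")

The margin looks generous on paper (~30 s vs ~10 s), but judging it by eye, repeatedly and under stress, is exactly the skill a simulator could drill with feedback on every trial.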
Besides a projector system which wouldn't be nearly as realistic, the only alternative to VR I can think of is repeatedly crossing an actual road all day. That would theoretically work, but there are four risk factors: a) is obvious, the fact that each crossing carries discrete risk; b) the fact that exhaustion from running back and forth would raise the stakes of (a); c) the fact that I'd be trying to be adventurous for the sake of learning which would make things worse; and d) the fact that as I gained experience and skill my risk of complacency would go through the roof due to repeated success.
Point (d) is valid for a simulation, too, but could be combated by constantly mixing up the environments - plain road; road with sharp bends; road with car speeding at 60 miles an hour around sharp turn or behind hill; etc - and maybe weird things like only allowing you to end the game when you failed, etc.
The huge controversy with this (there is a catch) is that young minds would latch onto this new kind of information instantly and turn kids into absolute ninjas capable of crossing complex roads routinely leaving just inches to spare. I see the average retiree driver heart attack rate going through the roof, to say the very least.
Because of this, I sadly don't see a school curriculum supporting something like this, and trying to make a company out of it would quite likely fail too because of the constant stream of negative press it would inevitably attract.
All the ingredients are there - you can repeat as much as you want with no cost, there's the element of competition and winning, and there's nothing stopping you from being adventurous and moving at the absolute last minute. Of course kids (full of energy, no idea what to do with it) are going to game that to the hilt to impress their friends. I have doubts that a game engine would be able to competently prevent that - I'm thinking of a "minimum winning crossing distance" metric, but I'm not sure if that would cover everything.
My crazy argument is to let it happen anyway: _let them_ scrape through the levels with inches to spare - because it might mean someone can save a life one day because they have the confidence to know they'll be able to do it in time. I've seen crazy internet videos of things like people dashing onto train tracks to rescue others at the last moment, and I'm not sure if I'd be able to manage that quickly enough because I'm missing precisely the information I describe here. (These are the related concept areas I mentioned at the start of this post.)
I think something like this would likely be best done as an open source project, in a framework where artists and modelers can easily collaborate and feed back art assets for new environments. The whole thing would need to stand on its own to gain traction, I think.
This is definitely not the kind of thing that looks awesome on paper, although I can see it being a lot of fun to work on, and something where you know you'd be teaching some really cool and liberating skills.
FWIW, I have absolutely no hope of getting my hands on any VR hardware anytime soon - due to circumstances entirely outside my control I've been stuck on hand-me-down PCs that average 10 years old for the past 2 decades - so I just thought I'd share it in case you (or anyone else) wants to play with it.
To clarify, the centerpoint of what I was describing above was that VR would provide the ability to repeatedly watch a car approach from a distance and learn what speed it was going at at the same time. If I had that I could do a lot of things.
I've noticed that the quality of conversation on VR has gotten a lot better. Used to be you'd go to a meetup and all you could get out of anyone was either parroting some urban myths about the porn industry driving technological change or looking for tech support on getting Unity set up. People are now asking themselves some really hard questions, like how do we design applications that adapt to both VR and non-VR use (there is an argument to be made that you can't meaningfully do so, but there is another argument to be made that you shouldn't stop your users from trying, as they tend to surprise you), or maybe the game development industry isn't the best model to emulate.
The tech is great and should be developed. But I worry about faking evidence, in court as well as in the news. It will get to the point where you can put any words in anyone's mouth, and seeing will be believing.
Perhaps it will be detectable, with technical effort, for some time to come, but as a propaganda and government corruption tool it will complete the circle started by the "telescreen/ankle bracelets" we all carry in our pockets.
The facial expressions were pretty mind blowing. I sometimes think with VR and graphics getting so good, will some gamers actually spend more time in virtual worlds than the real world
In 15 years maybe the bandwidth will be there, and we can have full-face VR headsets that also face track and transmit expressions to other people in virtual worlds.
The allure is there - be convincingly you, but also look however you want to look. That would get eaten up by a lot of people.
Yet the teeth/tongue were obviously wrong, and the hands' motion felt... off. The Uncanny Valley effect is difficult to overcome, while we're getting closer it's not until things are perfect that the improvement really matters.
In the past, people used to deceive (delude) themselves and the fools around them with theology, speculation and metaphysics; today they do the same with statistics, probability and abstract models.
Nassim Taleb reader/Twitter follower? In my (limited and mostly anecdotal) experience this sort of categorical thinking seems to lead to more, perhaps different kinds of, superstition. People read "The Black Swan" and start talking as if statistics, probabilities etc. were suddenly completely meaningless. Humans seem to always have to live in extremes.. ;)
It's all a matter of perspective. One could also claim that those who don't acknowledge God are deluding themselves with a lack of theology. I'm not trying to start a debate, but dismissing the beliefs of billions as simply delusion/deception is painting with broad strokes.
Why, evolutionary psychology and the philosophy of mind, comparative history and some sociology offer a comprehensive explanation of the mental and social forces, so to speak, that make religions possible. Religion is a social phenomenon of a language-possessing species, if you wish: misinterpretation of the instincts, co-evolution, and other causes and laws to which humans are subject, like any other species.
This is where I interject & mention this.
https://en.wikipedia.org/wiki/Opportunity_cost
People don't want to feel like they wasted time & resources over something that doesn't exist.
Religion can be viewed as just a construct for social activities; so many of them sprang up independently. People should really ponder whether their beliefs even account for something like interplanetary travel.
If you bring up opportunity cost, I think it's also worth mentioning Pascal's wager[0]. I don't believe in this argument, but it's an interesting point to consider.
Which god? The argument you make could also be made for all people following all the religions aside from the one whose god actually exists. The religions can't all be right, if in fact even one is right.
So sheer quantity of believers doesn't work for making a point.
I have used an argument like this, and the answer I got (which left me dumbfounded) was something like: "Everyone believes in some sort of divine power or entity or whatever, and the other religions just got theirs wrong."
They're basically saying that sheer quantity of believers in anything proves that their God exists!
The statement as quoted does not make the argument you claim it does. Perhaps the person you were speaking with elaborated in order to make that point, though.
Coming from a Christian perspective, however, I would agree that in general people have evidence to believe in God. I don't intend the quote from the Bible below to serve as any sort of evidence; it would not be a logical line of reasoning for someone who does not believe in the truth of the Bible. However, it may serve to further clarify my position.
"For since the creation of the world God’s invisible qualities—his eternal power and divine nature—have been clearly seen, being understood from what has been made, so that people are without excuse." -- Romans 1:20
Fair point although I wasn't intending to argue that number of believers is any sort of proof. In any case, at least Christianity and Islam each have over one billion believers globally.
> I wasn't intending to argue that number of believers is any sort of proof.
It definitely doesn't provide any proof, but I didn't say that. I said "sheer quantity of believers doesn't work for making a point", as in, how can the number of people in any way support the validity of any beliefs held by those people?
There's plenty of concrete historical evidence that beliefs held by many many people turned out to be wrong. It's basically the story of science. I.e. we have clear evidence that the fact that a lot of people hold a belief does not mean anything about the truth of that belief.
> In any case, at least Christianity and Islam each have over one billion believers globally.
Not sure where you are from, but many people (I am from NL, where there aren't many believers in the first category anymore; I live in the south of Spain, and there aren't many there either, though from what I read there are plenty left in the US, to my dismay, but who cares about that) are still deceiving themselves with theology. Or do you mean that both are beliefs, and so the numbers don't matter either way?
This seems incredibly cynical in light of the current breakthroughs in machine learning, and probabilistic modelling. It really feels to me as though the AI revolution isn't something to dream about anymore, because we're living in it.
I find it difficult to deny the achievement of tangible progress that is implied by, for example, the self-driving car.
UNINFORMED OPINION by a side-watcher: I don't feel there's any AI revolution at all. We simply have slightly more efficient deep learning networks due to better hardware.
Most of the time it feels like the people who are somewhat successful in those branches simply got lucky by randomly mixing elements A, B and C in an unexpected manner and boom -- magic.
In other words, things progress painfully slow and almost always it's due to intuitive shower/sleep revelations than anything else.
The algorithms are improved, and there are more network types, and better understanding of real neurons, but I think you are mostly correct.
I built a program that played tic-tac-toe in '94, combining traversal of the game tree with a neural network to evaluate positions.
To my understanding, this is basically the same approach used to develop a Go player, except that back then, on my small Amiga 500, it took a month of training for a minimal network to do anything useful at all...
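For anyone curious, here is a rough Python sketch of that general idea (not the original Amiga program): search the game tree a few plies deep and score non-terminal leaves with a learned evaluation function. The "network" here is just a random linear layer standing in for trained weights.

    # Minimal sketch (not the original Amiga program): shallow game-tree search
    # over tic-tac-toe, scoring non-terminal leaves with a tiny stand-in
    # "network" -- here just a random linear layer in place of trained weights.
    import random

    random.seed(0)
    WEIGHTS = [random.uniform(-1, 1) for _ in range(9)]  # placeholder for learned weights

    def evaluate(board):
        """Stand-in neural evaluation: a linear score of the board from X's view."""
        return sum(w * cell for w, cell in zip(WEIGHTS, board))

    def winner(board):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for a, b, c in lines:
            if board[a] != 0 and board[a] == board[b] == board[c]:
                return board[a]
        return 0

    def minimax(board, player, depth):
        w = winner(board)
        if w != 0:
            return 10 * w                      # a decisive result beats any heuristic
        moves = [i for i, cell in enumerate(board) if cell == 0]
        if not moves or depth == 0:
            return evaluate(board)             # fall back to the evaluation "network"
        scores = []
        for m in moves:
            board[m] = player
            scores.append(minimax(board, -player, depth - 1))
            board[m] = 0
        return max(scores) if player == 1 else min(scores)

    def best_move(board, player=1, depth=3):
        moves = [i for i, cell in enumerate(board) if cell == 0]
        score = lambda m: minimax([player if i == m else c for i, c in enumerate(board)], -player, depth)
        return max(moves, key=score) if player == 1 else min(moves, key=score)

    print(best_move([0] * 9))  # X = +1, O = -1, 0 = empty

With real training, the evaluation weights would be learned from played games rather than drawn at random; the search/evaluation split is the part that carries over to the Go-playing systems.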
Modeling is a new form of pseudo-scientific speculation. ML is closer to reality, as long as the training sets are not dogmatically interpreted noise, as they usually are in finance and other pseudo-sciences.
In my older comments you can read about the fundamental difference between a properly controlled, replicable scientific experiment and a computer simulation of some abstract/unverified model, and why the results of such simulations cannot be substituted for experimental results or any other form of evidence.
In programming in companies: the realization that when internal customers have no choice of internal IT providers, IT suffers, because it has less need to deliver valuable solutions effectively.
In leadership: management structure is a framework to enforce standardization and generally doesn't adapt well to change, even with the latest management silver bullets (lean, Agile, flat-orgs, etc)
Also in leadership: profound changes are occurring in society, and geographies no longer define cultures.
In commercial writing: it's still early, and this takes time, but the concept of the "book" and how it's created is changing. Technologies that allow writers, editors, and beta readers to work on the manuscript simultaneously are increasing the velocity of change.
In art in general: someone else here mentioned that music creation and payment tools are enabling entrants to sustain themselves in niche markets. This is happening in nearly all art forms, not just music. As electronic transfer fidelity increases, more art can be digitized and monetized. Look for more politicized, more global-reach art.
All these things stem from a greater understanding of the world and of human beings, starting with ourselves. It's important to realize each human being is a highly complex system and that generalizations about groups of humans are increasingly being challenged as scientifically unsound.
As a writer, tech and the globalization of English are enabling the hitherto impossible. Still not clearly seen but glimpsed as shadows behind screens, they either scare the timid or thrill the brave. They are coming.
In particular, wireless transmitters for roomscale are really exciting - seriously, I cannot wait to get rid of the wire-to-head era - as is roomscale for mobile devices.
The Vive getting additional trackers is also super-cool, as that will enable some much better forms of locomotion through foot-tracking. It'll take a little while to take off but I expect the Lighthouse tracking ecosystem to produce all kinds of cool things.
(Not all in VR, either. Drones plus Lighthouse, for example...)
My field of interest is censorship resistant systems. Systems like ZeroNet[1] are quite fascinating and are quickly becoming popular and used. Essentially they're decentralized via the bittorrent network. One cool thing that it brings to the table is the idea of having users modify a website (similar to how your comment modifies this page) - which is a hard problem in a decentralized system. They have come up with an interesting way for users to do this using trusted third-party certifying systems which are still totally decentralized (because users can switch to others when they see fit).
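To make that pattern concrete, here is a purely conceptual sketch (this is not ZeroNet's actual protocol or wire format): a certifier vouches for a user identity, and peers accept a post only if both the identity certificate and the content signature check out. HMAC over shared secrets stands in here for real public-key signatures.

    # Conceptual sketch of the "user content via a trusted certifier" pattern
    # (NOT ZeroNet's actual protocol): a certifier vouches for a user identity,
    # and peers accept a post only if both the identity certificate and the
    # content signature check out. HMAC over shared secrets stands in for real
    # public-key signatures; in a real system the certificate would bind the
    # user's public key and peers would verify against that key.
    import hmac, hashlib

    CERTIFIER_KEY = b"certifier-secret"   # hypothetical certifier key
    USER_KEY = b"user-secret"             # hypothetical user key

    def sign(key, message):
        return hmac.new(key, message, hashlib.sha256).hexdigest()

    def issue_certificate(user_id):
        """The certifier vouches that user_id is a valid identity."""
        return sign(CERTIFIER_KEY, user_id.encode())

    def publish(user_id, cert, content):
        """A user publishes content together with their identity certificate."""
        return {"user": user_id, "cert": cert, "content": content,
                "sig": sign(USER_KEY, content.encode())}

    def accept(post, trusted_certifier_keys):
        """A peer accepts a post only if the certificate comes from a certifier
        it trusts and the content signature matches; switching the trusted
        certifiers is how users 'switch to others when they see fit'."""
        cert_ok = any(hmac.compare_digest(post["cert"], sign(k, post["user"].encode()))
                      for k in trusted_certifier_keys)
        sig_ok = hmac.compare_digest(post["sig"], sign(USER_KEY, post["content"].encode()))
        return cert_ok and sig_ok

    cert = issue_certificate("alice@certifier")
    post = publish("alice@certifier", cert, "hello, decentralized world")
    print(accept(post, trusted_certifier_keys=[CERTIFIER_KEY]))  # True

The point of the pattern is that the certifier is replaceable: a peer's trust list, not any central server, decides which identities are accepted.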
I helped with it for a little while, but the main developer was resistant to:
* Using a package manager or bundling dependencies into a compressed form. Dependencies had to be in the same git repo, fully extracted. (A bit of a "code smell")
* Dependencies could take months to get security updates.
* Documentation couldn't be in the git repo.
* Python 3 was "not an option"
Also:
* The main developer has limited experience with the torrent protocol.
It is an interesting project: but it is not a private or secure one.
CubeSats and small satellites are changing the game for spacecraft. Now scientists can get experiments launched for a few million dollars instead of campaigning much of their career for a mission costing hundreds of millions.
I work in remote sensing and we do e.g. segmentation of satellite imagery. There are two exciting developments: first, lots of vector data (think building footprints, road networks, etc.) and (satellite) raster data (e.g. Sentinel-2) is now available for free; second, image segmentation using CNNs works extremely well. Therefore there are many opportunities to build all kinds of software, in particular CNN-based classifiers and distributed systems to handle the immense load of new data.
So I can highly recommend the field of remote sensing as there are many interesting problems to solve.
There is currently no established hub that aggregates them, but public agencies often offer them through open-data initiatives at the national level; e.g. Austria has https://www.data.gv.at/ and searching for "Gebäude" (building in German) lets you find e.g. the data set https://www.data.gv.at/katalog/dataset/ac74b38e-57cd-4c8c-8f... where you can download building footprints for the state of Tyrol.
The same thing is also done at the international level; e.g. the European Union provides platforms, as do the environmental agencies in the US.
# edit:
Clarification: These footprints are not satellite-derived (that is the goal, but it doesn't yet work well enough for many applications; we will probably get there...) but are hand-crafted by people working in city planning. The point is that you can use the data as training data.
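As a hedged illustration of what "use the footprints as training data" can look like, here is a minimal Keras sketch of a tiny fully-convolutional network for binary segmentation of image tiles; the tile size, layer sizes, and random stand-in data are illustrative, not a recipe.

    # Minimal Keras sketch of a tiny fully-convolutional network for binary
    # segmentation (e.g. building footprints from image tiles). The 128x128x3
    # tile shape, layer sizes, and random stand-in data are illustrative only.
    import numpy as np
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(1, 1, activation="sigmoid"),   # per-pixel footprint probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Fake (tile, rasterized-footprint-mask) pairs standing in for real training data.
    x = np.random.rand(8, 128, 128, 3).astype("float32")
    y = (np.random.rand(8, 128, 128, 1) > 0.5).astype("float32")
    model.fit(x, y, epochs=1, batch_size=4, verbose=0)
    print(model.predict(x[:1]).shape)  # (1, 128, 128, 1) -- a probability mask

In practice you would rasterize the hand-crafted footprints onto the satellite tiles to produce the masks, and use a deeper encoder-decoder architecture, but the training setup is essentially this.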
Enterprise Architecture. What is often an unmanageable bundle of "models", pictures, and documentation (UML etc., or tools, or "repositories") is giving way to concise and precise schemas for architecture decision making – a pleasant outcome of the informal global teamwork surrounding the meta-models in DoDAF, simplifying EA activity to a degree that has not been anticipated ...
UML - I know next to nothing about UML - but what I do know is the language was invented first and then people came around and tried to give semantics to the language. Well, in other words what that means is that the language was invented first and it really didn't mean anything. And then, later on, people came around to try to figure out what it meant. Well, that's not the way to design a specification language. The importance of a specification language is to specify something precisely, and therefore what you write - the specification you write - has to have a precise, rigorous meaning. - Leslie Lamport
UML as a specification language is the right tool in software architecture. I find it to be a very flexible tool, helping software projects and process models. However, _Enterprise_ architecture needs to work with the "Business" (consider COSO, COBIT and ITIL, and why they emerged when UML's foundations were already so strong).
Flexible solar panels, LED lighting with open source drivers, and the new generation of DC refrigerators are all incredibly exciting and are allowing us to experiment with living without grid electricity.
It is much more efficient than DC fridges of even just a few years ago. It has configurable settings to respond to battery levels and can be configured via wifi.
Nvidia, Broadcom, ARM, Mediatek, Samsung... I don't mean to imply they all do a perfect job or open-source everything they sell, but there are noticeable amounts of code they put into the mainline kernel.
SideFX Houdini 16 is coming out [1], the new version of the most awesome software for 3D VFX and animation. Super excited about this, it's gonna be awesome!
Also, I'm really looking forward to the ActivityPub [2] implementation, that'll do a lot of interesting things for decentralized web.
Looking forward to this too. I'm going to the launch party for Houdini 16 here in East London tonight. Make sure to check out Fabric Engine too, especially if you're doing any realtime VR/AR stuff.
Overall, I'm most excited about VR/AR/MR in relation to storytelling and education and how the two can be combined. Houdini and Houdini Engine for UE4 are definitely worth considering as part of your VR/AR development stack.
In general, this is a question that I would ask interviewees (for any position). Any answer other than shock shows that they are keeping abreast of their field.
Then I would fail. Not because I don't follow tech news, but because I feel what is being created now is stuff we should have had decades ago (what company is even working on flying cars?).
I am so not looking forward to flying cars, at least not for everyday use by everyday people. I've recently been having dreams of people cutting me off in traffic. Probably because a couple weeks ago someone did just that to me, trying to beat a red light by speeding up and changing lanes to occupy the space that I was in at that time; my reptilian brain and peripheral vision were the only thing that stopped a collision.
I really don't want people doing that over my apartment or favorite urban trail.
That's not a bad answer though - it demonstrates you understand the value of those things we should have had years ago, and might include things which aren't being worked on yet (even though they should be).
The size of embedded electronics we have now makes me very excited about the near future. As a hobby I am excited by the advances in programming language development; most seem tiny and incremental, but a lot of long-term research is getting working implementations, and that is brilliant. Another hobby is the robust push for timing-perfect emulators of more and more older systems. But more than anything VR excites me; it is not 'my field' per se (I plod around clumsily with little demos) but it will be in the future. And it will never end.
Edit: there is a lot to be excited about these days
SQL Server coming to Linux via Docker containers. It's insane and exciting. We are an MS-only shop, and this matters because I'm pushing to move us away from Windows and onto Linux if possible; the kicker is that we are committed to SQL Server, so exciting times ahead. Hopefully MS doesn't gimp SQL Server on Linux.
I'm really pumped about this open source tool project I've started which promises to join Lean Startup/Hypothesis-Driven Development and DevOps. Enter everything only once, have it available wherever it's needed.
Analysis has always been an area where the tech community has been lacking, ever since it was overdone back in the days of structured programming. It's really cool to bring back a bit of structured analysis as just another tool in the DevOps pipeline and join up the information with all the folks that need it.
Still working on the elevator pitch. Unfortunately it's not as obvious as something like "Facebook for cats!" (Although I think it will be much more useful)
The general idea is to be able to have informal, unstructured business conversations, take those conversations and type extremely brief, semi-structured (tagged) notes, and have those notes "compile" out to various places throughout the organization where they might be needed. One way to think of it is Requirements/Use Cases/User Stories without the rigor. (Or rather, without the rigor and the onerous BS folks always seem to add around them.)
Here's the repo. There's also a PDF with details of the tagging language I can send if you're interested. Ping me.
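Since the repo isn't linked here, the following is only a hypothetical sketch of the general idea (the real tool's tagging language surely differs): parse brief one-line notes, pull out their tags, and route each line to whichever downstream bucket its tags name.

    # Hypothetical sketch of the idea (the real tool's tagging language differs):
    # take brief, tagged one-line notes and "compile" each line out to whichever
    # downstream bucket its tags name -- backlog, test ideas, docs, etc.
    import re
    from collections import defaultdict

    NOTES = """
    #story checkout should remember the shopper's last address
    #test #story what happens when the address service is down?
    #doc the address service owns all address validation rules
    """

    ROUTES = {"story": "backlog", "test": "test-ideas", "doc": "documentation"}

    def compile_notes(text):
        buckets = defaultdict(list)
        for line in text.strip().splitlines():
            tags = re.findall(r"#(\w+)", line)
            body = re.sub(r"#\w+\s*", "", line).strip()
            for tag in tags:
                buckets[ROUTES.get(tag, "unsorted")].append(body)
        return dict(buckets)

    for bucket, items in compile_notes(NOTES).items():
        print(bucket, "->", items)

The "enter everything only once" part would then be a matter of pointing each bucket at the system that needs it (issue tracker, wiki, test plan) instead of printing it.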
Long reads from PacBio, Oxford Nanopore, and 10x are also exciting. This new tech, coupled with Hi-C for scaffolding and single-cell sequencing, brings up the possibility of complete knowledge of your genome, the collection of strains in a metagenome, or all the types of cells in a tissue/cancer sample.
"field" would be a strong word as it's more of a diy hobby thing, but in the world of FPV drones I'm excited for flight controllers with integrated 4-in-1 ESCs (electronic speed controllers). Wouldn't say it changed the game but makes it so much easier to build these quadcopter and opens up new possibilities.
The growing class consciousness is the most exciting, as well as scariest. We can build a non-profit driven world (socialism) - or - hate driven world (fascism). Reading various texts starting with Karl Marx's Das Kapital is probably the most important learning a person can have at present.
NewHope and NewHope-Simple Ring-LWE key exchange systems. Post-quantum secure key exchange with performance (speed/key size) that's actually practical! There's not much point to having a secure cryptosystem if it's so expensive you can't use it.
Deep learning is a game changer for image processing (that should be fairly obvious to anyone reading HN). It still requires a lot of expertise to use, but it's enabling people to do things that were previously extremely difficult or even impossible to achieve.
If you want something a bit more user-friendly than TensorFlow to get started, I suggest looking at Keras. It's basically a higher-level framework built over TensorFlow (and Theano).
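To give a feel for how concise that is, here is a small, hedged example: a complete (toy) image classifier in a few declarative Keras lines. The 28x28 grayscale shape, 10 classes, and random stand-in data are illustrative only.

    # A small illustration of why Keras is friendlier than raw TensorFlow: a
    # complete (toy) image classifier is a few declarative lines. The 28x28
    # grayscale shape, 10 classes, and random stand-in data are illustrative.
    import numpy as np
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    x = np.random.rand(32, 28, 28, 1).astype("float32")  # stand-in images
    y = np.random.randint(0, 10, size=(32,))              # stand-in labels
    model.fit(x, y, epochs=1, verbose=0)
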
I'm an accountant working on financial reporting, and I am very excited about ways to implement automation in financial reporting processes. Only just now are people using Excel proficiently; I can't wait to see what the next big step is.
Long story short, so many processes I work with are done completely manually, which is a colossal waste of time. When I started, the person who previously did my job had about 7 main processes they completed monthly, which took about 60 hours to complete. Those 7 processes take me about 10 hours now that I've built automated workbooks.
The sad thing is that these Excel capabilities have been around forever, but no one understands them.
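For what it's worth, a lot of this kind of monthly grind can also be scripted outside Excel. Here is one hedged sketch in pandas: aggregate a general-ledger extract and write a summary workbook. The column names and the output file are hypothetical placeholders; in practice the raw data would come from pd.read_excel on the month's export.

    # Hedged pandas sketch of automating a manual monthly step: aggregate a
    # general-ledger extract and write a summary workbook. Column names and the
    # output file are hypothetical; in practice `raw` would come from
    # pd.read_excel on the month's export.
    import pandas as pd

    raw = pd.DataFrame({
        "cost_center": ["100", "100", "200", "200"],
        "account":     ["6000", "6100", "6000", "6100"],
        "amount":      [1250.0, 430.0, 980.0, 75.0],
    })

    summary = (raw.groupby(["cost_center", "account"], as_index=False)["amount"]
                  .sum())

    with pd.ExcelWriter("monthly_summary.xlsx") as writer:  # needs openpyxl installed
        summary.to_excel(writer, sheet_name="Summary", index=False)
    print(summary)
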
As someone who works in the semiconductor industry, one of the most exciting things happening right now is the development and emergence of persistent/storage-class memory (PCM/RRAM/3DXP/NVDIMMs). The implications of a persistent alternative to DRAM are immense; besides fundamentally changing compute/memory/networking/storage architectures, it will also change programming models and SW stacks as we know them today. This is a topic I feel doesn't get enough visibility here, especially given that support for such technologies has already started getting baked into Linux and Windows.
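As a hedged illustration of the programming-model shift (not real PMEM code): persistent memory encourages a load/store style where you map a region and update it in place, rather than serializing through a storage stack. The sketch below uses an ordinary memory-mapped file only to show the access pattern; real code would sit on a DAX filesystem and use something like libpmem to control flush ordering.

    # Illustration of the load/store programming style persistent memory
    # encourages: map a region and update bytes in place. An ordinary
    # memory-mapped file is used here only to show the access pattern; real
    # PMEM code would sit on a DAX filesystem and use e.g. libpmem for flushes.
    import mmap, os, struct

    PATH = "counter.pmem"                    # hypothetical "persistent" region
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(b"\x00" * 8)             # one 64-bit counter

    with open(PATH, "r+b") as f:
        with mmap.mmap(f.fileno(), 8) as region:
            value, = struct.unpack("<Q", region[:8])
            region[:8] = struct.pack("<Q", value + 1)  # in-place "store"
            region.flush()                             # stand-in for a persist barrier

    print("counter value (survives restarts):", value + 1)
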
Embedded systems - Wireless Sensor Networks. I know they have been around for a long time, but IoT will encourage them further, and they could enable the development of different kinds of devices as well. Look at the camera industry, for example: more types of sensor should become as popular as the image sensor has. Quadcopters/drones/AI, etc.
In my view, there is still huge room for applications where wireless and sensors are combined, and we already have the web/native platforms. This is such an exciting development!
>> Look at the camera industry, for example: more types of sensor should become as popular as the image sensor has.
Could you please elaborate on this? I don't understand how the camera industry could go beyond image sensors. They wouldn't be the camera industry anymore if they did that.
I was thinking of a group or cluster of quadcopters flying together.
There are some nice protocols and topologies in the Wireless Sensor Network literature. While the devices' sensors collect environmental data, they can communicate with each other in several ways (e.g. ad-hoc, hierarchical) and command one another to behave in different ways (e.g. quadcopters maneuvering in whatever pattern is beneficial).
Some more ideas on that flying example: the cluster could calculate overall battery usage and balance it across its members by wireless charging on the fly.
Underwater devices or robots would be more interesting.
I am a physicist in biology (so take these with a grain of salt), and CRISPR is arguably the most exciting development there. This technique allows DNA to be edited using guide RNAs, which can be readily synthesized, in contrast to the DNA-binding proteins that targeted editing required before. What's more, the same technology can be used to adjust gene regulation too. These techniques are not only giving basic research a big boost, but also making many new treatments possible.
Another hot topic is organoid bodies and organs-on-a-chip. These are experimental systems where stem cells are induced to grow into structures similar to embryos or organs, allowing the study of development and facilitating drug testing, etc.
Thirdly, advances in sequencing have made it possible to study what kinds of bacteria live symbiotically within and on us. The composition of this so-called microbiome seems to broadly affect body and mind.
Finally, in my personal field, the simulation of how "simple" cells build complex structures and solve difficult tasks, the most exciting development is GPGPU :-)
I also thought about it, but I do not see people flocking to synthetic biology. I have the feeling they are further from big breakthroughs, but this might just be my environment ...
I'm a web developer. We've picked up Microsoft Orleans for a large scale data analysis platform we're building. Realizing the power of an actor model on a mature platform like .NET has been a real treat. So many nasty problems go away: threading, messaging queues, job queues, caching, general scaling.
I can't speak for everyone in my field (chemical manufacture and catalyst development), but I wrote about some of what I think are the current coolest new developments in chemistry and materials science as it relates to machine learning. [1]
In summary, the use of machine learning can help us develop better representations of chemical reactions, catalyst behavior, and we can now use adaptive learning to create closed-loop systems to identify, carry out, and optimize chemical processes to reduce environmental impact, reduce energy usage, and decrease costs.
The state of the art isn't quite there, but I see no major conceptual barriers left -- just a matter of implementing it.
Advertising. Definitely first-party data for targeting. An advertiser takes some data from its CRM, sends it to the big social sites and Google, and then uses the list to target those folks specifically or create lookalikes. Actual cross-device targeting (because people are logged in), extremely personalized and relevant.
That's advertisers themselves doing a bad job of retargeting. First-party data is probably not going to change that. In fact, it may mean you'll just get those ads across all your devices.
Predictive analysis done via Learning Management Systems (LMS) to identify students at risk of dropping courses at universities. Student retention is a big topic now because it directly impacts a school's revenue stream and financial health. The big hope is for AI to be able to track how students interact with peers, with teachers, and with instructional content, and then cluster students by their dropout risk.
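A hedged sketch of the clustering idea with scikit-learn: represent each student by a few LMS engagement features (all hypothetical here), cluster them, and treat the lowest-engagement cluster as the group to follow up on. Real systems would use far richer features and supervised models trained on historical outcomes.

    # Hedged scikit-learn sketch of the idea: represent each student by a few
    # LMS engagement features (all hypothetical here), cluster them, and treat
    # the lowest-engagement cluster as the "at risk" group to follow up on.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # columns: logins/week, forum posts, content items viewed, assignments submitted
    students = np.array([
        [5, 4, 30, 3], [6, 2, 25, 3], [1, 0, 4, 1],
        [0, 0, 2, 0], [4, 5, 28, 2], [1, 1, 3, 0],
    ], dtype=float)

    X = StandardScaler().fit_transform(students)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Flag the cluster with the lower mean engagement as at-risk.
    cluster_means = [students[labels == k].mean() for k in (0, 1)]
    at_risk = int(np.argmin(cluster_means))
    print("at-risk students (row indices):", np.where(labels == at_risk)[0].tolist())
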
Not my field but 3D reconstruction and vision to scene graph is amazing. From a single camera, being able to create a video game version of it opens a ton of possibilities. I predict a video game version of all the roads, lakes, buildings of our real world.
This will change real estate websites as well. I can just query for houses with X visual features
Another JavaScript framework. No, seriously, my field is starting to unravel the secrets of artificial intelligence. A lot of ethical conundrums are going to be raised by those advancements.
I am a biomedical researcher working in the field of genomics. I spend the majority of my time curating literature. I think it could be automated; we need an AI algorithm for this area.
For networking, IMHO, it would be the combination of SDN + DPDK making it feasible to use vanilla x86 boxes for a wide variety of tasks that would earlier have required 'purpose-built' silicon, etc.
Most annoying is the need to write in ES6/Babel today, and all these JS hipsters who really believe this is the future of web development. I totally hate Babel with its dozens of Webpack patches/plugins needed to make it work. Oh, and don't forget your (Airbnb-style) linter if you want to be politically correct.
No one needs Babel to write stellar code, IMHO. Unfortunately it is not about the quality of the code you write; it is about being politically correct. This whole ES6/ES7 thing is largely based on what CoffeeScript, LiveScript, etc. already did, much better, more than 5 years ago. And I dare to guess that most of the Babel proponents don't even realise it's just a transpiler that they will need until the end of the project's life.
note: I expect serious down votes as opposing Babel is almost a serious crime nowadays and proves my unlimited stupidity.
No, web development is not really exciting nowadays; it is more terrifying. Where does the hype go tomorrow? Maybe soon I will be forced to write in MS Typescript if I want to be taken seriously. The same goes for Redux, because Flux is so 2014.. you must be very brave not to use Redux! I can go on and on, way too many examples..
Finding a web developer job now is largely about complying with made-up standards that become more complex every day. And I've seen quite a few horrible code bases that comply perfectly! It's a very sad reality.
Let's establish a couple things right off the start:
1. You can write great applications without the latest language features
2. The latest language features do make development easier
Babel is necessary for #2 if you don't have control over the browser which your users use to access your site. If you don't want to transpile, don't. It's as simple as that. However, the future of JS is the future of web development, that is indisputable. Using Babel allows you to stay closer to the future and/or use these great new language features.
You also brought up TypeScript.
3. Types make development much easier
TypeScript is a combination of types and a transpiler for the ability to use the latest ES features. Types are great, providing:
- Better self-documenting code
- More safety
- IDE interop to provide completion, as seen in VS Code
> note: I expect serious down votes as opposing Babel is almost a serious crime nowadays and proves my unlimited stupidity.
From the HN Guidelines: "Please don't bait other users by inviting them to downvote you or proclaim that you expect to get downvoted."
> Maybe soon I will be forced to write in MS Typescript if I want to be taken seriously.
Many would say that someone should be forced to write in a typed language in general in order to be taken seriously.
typical.. I'm very glad you're not the person 'establishing things' in our team.
> If you don't want to transpile, don't. It's as simple as that.
Are you kidding? Please tell me your estimation of how many developers write in ES6/ES7 without using Babel or other transpiler???
You don't really need to tell me what types are about; I have a long-standing C/C++ background. And I really don't need TypeScript. I use dynamic type checking based on ES3, and it has done the job flawlessly for years. It's very rare for me to have a type-related bug. I'm always wary of people who preach TypeScript; what code do they write to get into so much trouble with types?
> Many would say that someone should be forced to write in a typed language in general in order to be taken seriously.
omg.. 'forced', this is bad.
I'm only looking forward to WebAssembly; that will be the real game changer and the end of JS as we know it.
I think you make some very good points. Focusing on a particular toolchain because it's popular, rather than for the benefits it provides in development and in the result, is misguided. When developers first start out there are a lot of choices to make, and they may not have the foundation or experience to make the decision they would likely make a few years down the road. Often you're just happy when you get something working, and repeatedly so.
In the JavaScript community there's been an explosion of tools, tool chains, libraries, module systems; without a lot of experience, it's very easy to be overwhelmed. Criteria for choice can include the availability of easy-to-understand tutorials as much as understanding the motivations and design decisions and limitations of the tools themselves. Some people are very good at writing blog posts describing what they've done, better than conveying the underlying concepts. I don't want to necessarily fault them for that, as I think there's often a sincere desire to show people a way that works for them.
Rather than focussing solely on what's wrong, it's very important to also include options you think are better, and why. Looking down your nose at others, describing them as js hipsters using politically correct tools does no one any good, and makes it even less likely people will listen to what you have to say. And on HN, expressing the expectation of down votes is a guaranteed method of receiving them; it's against the guidelines and adds nothing to your comment.
There are a lot of choices in the web development space today. The desire to standardize on something (such as the push for Babel and Webpack) is laudable in that they recognize that so much choice is not necessarily good: it makes it more difficult to decide what to use (sometimes good enough is just that), and splits resources that may otherwise be used to improve a more limited number of options. That's not to say Babel and Webpack are the best options: just that I understand the motivation for standardization and push to popularize a few (rather than all) options.
I agree there's a lot of cargo-culting and bandwagon-ism nowadays. It's the result of there being lots of beginners who just want a recipe they can apply to produce a certain result. I don't blame them for wanting that or for being beginners, or for being frustrated at the obvious madness of it all. What bothers me is the apparent lack of interest these people seem to have in understanding the history and trade-offs that got us here.
I'd love a factual and accurate report of that history and those trade-offs, even a book-length one on paper. Do you know of one? I'm not trying to do a dissertation on the subject, and it's not my day job, so weaving my own unreliable history out of Google searches and cranky blog posts is not ideal.
The only thing I can think of is those sleep-timer apps. I think the main reason they haven't taken off is that they require you to put your phone under your sheet every night, which you can forget to do and which is a bit of a pain. They're also completely thrown off if someone else is in bed with you.
There have been a few Kickstarter projects which claim to reduce the amount of sleep you need, but they've all turned out to be nonsense AFAIK.
I agree it's weird that we have nothing when we spend 1/3rd of our lives asleep.