I'm entirely unfamiliar with Ruby, so although this may read like a swipe at your language choice, it isn't. I am confused about the banks not wanting to stay on CICS and extend to the web with something like Liberty server profiles that launch Java threads which understand CICS, and therefore understand any custom error codes specific to the transaction system. I'm guessing you needed to catch errors that weren't native to your code and were unique to the respective bank clients. I'm further guessing that because CICS provides comprehensive custom error-handling protocols and services, because CICS is everywhere in financial transactions, and because nobody wants to be responsible for the fate of lost transaction-server error messages, that is why you could patch in the error handling this way without any problems in your own code?
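
To make my guess concrete (and forgive the amateur Ruby from someone who just admitted to not knowing the language): I imagine the patch amounts to something like the sketch below, where a gem's response handler gets reopened so it recognises bank-specific CICS RESP/abend codes it was never written for. Every module, method and code value here is invented for illustration, not taken from the article.

    # Hypothetical sketch only: guessing at how a gem's response handler might
    # be reopened to recognise bank-specific CICS RESP/abend codes it doesn't
    # know about. Every name and code value here is invented for illustration.

    module BankGateway
      class ResponseHandler
        # Imagined stock behaviour: anything unrecognised raises a generic error.
        def classify(code)
          raise "unknown gateway code #{code}"
        end
      end
    end

    # The monkeypatch: reopen the class, translate the custom codes first,
    # and fall back to the original behaviour for everything else.
    module BankGateway
      class ResponseHandler
        CUSTOM_CICS_CODES = {
          "AEY9"    => :cics_resource_unavailable,  # hypothetical mapping
          "DFH0999" => :bank_specific_timeout       # hypothetical mapping
        }.freeze

        alias_method :classify_without_cics, :classify

        def classify(code)
          CUSTOM_CICS_CODES.fetch(code) { classify_without_cics(code) }
        end
      end
    end

    handler = BankGateway::ResponseHandler.new
    handler.classify("AEY9")   # => :cics_resource_unavailable

If that's roughly the shape of it, the patch is essentially a translation table bolted onto someone else's class, which frames the rest of my concern.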

Since IBM went all in on services and consulting and practically only the mainframe remained of the old IBM, and while x64 et al. went through an architectural Cambrian explosion at the turn of the century, the IBM Z division has become an entirely different animal. The Telum processor just discussed at Hot Chips 33, with CISC instructions for memory architecture and networking and in-pipeline RNNs, is so far out there that I have started trying to figure out what it takes to become an IBM partner just to get enough access to write about this freak of technology. If the team exists, the finance is a slam dunk: link up with an infrastructure fund. This mainframe AI can be trained with regular tools and runs with really low latency - it's on the die.

Just don't be ageist, and train vets with clearances (for LOB reasons, or just because it's the right thing to do). My partner founded a headhunting firm for veterans and placed the vast majority in IBs after six months of his programme, with age no issue thanks to CEO-signed undertakings. He really hit it out of the park with the programme's quality - the quant track covered everything from FP maths to martingales - but it had to shut down because the people guy who kept everything from rabbit-holing, in service of my partner's fantasy hedge-fund ambitions, died of cancer. A tech partner is necessary to free me to pick up the relationships, which are as open as the day my partner kicked those doors off their hinges with some outrageous leverage of 80s mainframe knowledge that pumped the flow desk of a financial infrastructure organisation whose CEO didn't blink when my partner made the FT, trampling over every unwritten rule of headhunting that he never knew existed, like everything else beyond his screens. This guy has just shy of 50 years of success behaving like this; what am I to know? I am still recovering from the unending cardiac arrests I had for no reason until I realised that some relationships really do transcend everything.

The UK, incidentally, has an enormous pool of senior management talent created by the government firing all the civil servants who qualified vocationally instead of at university. Everybody was summarily dismissed, just before the 1997 employment act was introduced. I have worked with the director of the largest aid agency in budgetary terms after the Gates Foundation, whose office was a dozen or so people. It now needs 600, apparently. The logic was that partner agencies didn't manage fraud well enough for the British government's liking. Plenty manage it better than we do, without any relationships with the same leaky institutions that now take pride in embarrassing the Brits, with implicit sentimental support from the agencies we cut off.

My point is not self-promotion but the promotion of an architecture hardly anyone gets near enough to speak about, for comparison with the front line of web development. That is an unmitigated loss for everyone all around.

All that boastful background is intended to directly criticise the nature of the technique discussed here, which is controversial enough to elicit numerous warnings in the comments immediately after publication.

Given the search rank of HN and the difficulty of finding human-readable (non-expert) commentary on highly technical issues like monkeypatching, I don't see how the existence of this discussion doesn't de facto eliminate the technique, and potentially even the technology, as a serious option for critical-systems deployment.

Consider the weight a browsing executive will place on what's said here.

In the example just given, the application is applying node-specific handling changes to in-progress production code, in a webserver talking to banks running transaction systems.
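
Again hypothetically (the environment variable, module names and code value below are all made up), "node-specific handling changes" is something I'd picture being gated like this, so the patch only takes effect on nodes serving one bank client while every other node keeps the stock behaviour:

    # Still hypothetical: the environment variable, module names and code value
    # below are invented. The idea is gating the patch so it only takes effect
    # on nodes serving one particular bank client.

    module BankGateway
      class ResponseHandler
        def classify(code)
          raise "unknown gateway code #{code}"   # imagined stock behaviour
        end
      end
    end

    module BankAErrorHandling
      # Prepended module wins the method lookup; `super` falls back to the
      # original classify for every code it doesn't handle.
      def classify(code)
        code == "AEY9" ? :cics_resource_unavailable : super
      end
    end

    if ENV["BANK_CLIENT"] == "bank_a"
      BankGateway::ResponseHandler.prepend(BankAErrorHandling)
    end

Module#prepend at least keeps the original method reachable through super, but it is still behaviour that differs per node, which is the part that would make me nervous in front of a transaction system.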

Consistency of the web service, and presumably of the user interface, is critical to customers and likely to be their only way of accessing account information - despite how much people will pay for the capability. (We wouldn't give up our company's X.500 terminal, probably not even after corporate death: companies have been bought out of administration solely for their grandfathered services often enough that I considered trading them for a living.)

Monkeypatching tells me that the errors aren't in your code, because why else introduce the trade-offs and risks of a technique that could blow up all of your most impressive customer references at the same time? I'm guessing hard that nothing much actually has to be done to the error messages.

But it's hard to blow up CICS (assuming you read the docs), even without in-depth mainframe knowledge. The opportunity to launch lightweight Java environments and FFI into your Ruby runtime, or to make the call via CICS from an SPI error manager in JavaScript so that CICS is giving you actual state, or to create a message feed for individual subscription as an extra confirmation that you didn't lose their messages - all of it running under the banks' sysadmins' purview, while giving you the same certainty that your updates applied, with point-in-time rollback to an exact system clock and transaction stamp and in-flight transaction state, and zero sweat about potential consequences - looks pretty attractive from where I am seeing this.
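
For what it's worth, the FFI route I'm hand-waving at would look something like this on the Ruby side, using the ffi gem against a made-up C shim; the shared library name and the function signature are placeholders, because I have no idea what the real integration layer would be.

    # Hedged sketch of the FFI idea: bind the Ruby runtime straight to a CICS
    # client library with the ffi gem. The shared library "cics_bridge" and the
    # cics_link signature are invented placeholders, not a real IBM API.
    require "ffi"

    module CicsBridge
      extend FFI::Library
      ffi_lib "cics_bridge"     # hypothetical shim wrapping the real client

      # Assumed C signature:
      # int cics_link(const char *program, const char *commarea, int *resp);
      attach_function :cics_link, [:string, :string, :pointer], :int
    end

    def call_cics(program, payload)
      resp = FFI::MemoryPointer.new(:int)
      rc   = CicsBridge.cics_link(program, payload, resp)
      # Surface the RESP value the transaction system actually returned,
      # rather than guessing at it from the web tier.
      raise "CICS RESP #{resp.read_int}" unless rc.zero?
      rc
    end

The point isn't that particular binding; it's that the RESP value comes straight from the transaction system instead of being reconstructed in a patched-up web layer.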

If I have totally misunderstood this, I'd like to beg a point for the scenario I have described: one where you are responsible for the ephemeral bleeding edge of web UX and (in my hackneyed eyes) for making a stateless, best-effort, ad hoc text delivery protocol stateful, reliable and infinitely scalable, plus a GUI thread manager and connections manager and anything else I omitted to boot. Behind all of that sits a great hulking mainframe nobody wants anyone to touch, because the lack of adoption can't overcome the human fear of being responsible for borking an IBM Z, no matter how indoctrinated everyone is that Z can't fault. Assumptions of complacency obliterate reason, and so you're hoodwinked (I joined that club long ago) into never looking up how you can run webserver tech on CICS, keeping it on the mainframe and getting the benefit of all that reliability, fault tolerance and data-processing invincibility for your own personal gain.

Putting the code on CICS is a slam-dunk argument, because how can anyone beat the mainframe management magnificence mantra unless there are major issues with the big-iron priesthood? Your code automatically gets the same management and reliability capabilities, and it gives sysops single-pane-of-glass sight of everything happening with the front-end interactions, an opportunity to get in on the new wave (potentially for them personally), and a nice brag of relevance in the face of the bleeding-edge, unstructured world we represent. Meanwhile the web development team just bagged major C-suite kudos and the ability to co-opt some of that incredibly valuable institutional invincibility incentive, which can transform your business model from job quotation to Senior Web Scale z/OS Customer Systems Integrity Consultant times head count, plus paid pitch fees, retainer and expenses. Add the probable possibility of insurance cover for DR scenarios and hourly overtime on any incidents, paid for by customer DR budgets and policies instead of being on the wrong side of pointing fingers - and the one thing C-suite types sympathise with is web UX and general web reliability grief.

I surely deserve criticism of my style here, but what's going on with the mainframe right now is really wild. My pitches above were hypothetical, to illustrate the point, although my partner and good buddy is every bit as much work as that non-fiction description suggests, and a leading dinosaur in my eyes. I absolutely accept that the chances of mainframe ubiquity are not meaningful numbers. But with everyone busy making the OS as irrelevant as possible, how long does it take for the mainframe Telum CPU to become viable mainstream silicon? Very strangely, I think I am going to live to see that move being made. Maybe the IBM strategy is the right one, and only the attitude of management is the problem.



