I'm going to express something a lot of people are thinking and are being far too diplomatic about.
React Hooks are a fucking stupid idea and always were.
They're basically just adding dynamic scoping to a language and framework which doesn't need it, in one of the most 'magical' and confusing ways possible. You have to care about execution order to understand exactly how they'll all work, and that will bite you eventually if you're building anything without knowledge of all the contexts in which it's called.
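A minimal sketch of the execution-order hazard (the component and prop names here are invented):

```tsx
import { useState } from "react";

// Hypothetical component: calling a hook inside a branch changes the call order.
function Profile({ showBio }: { showBio: boolean }) {
  const [name] = useState("Ada");         // hook call #1 on every render

  if (showBio) {
    // Sometimes call #2, sometimes skipped entirely. React identifies hook
    // state purely by call position, so the slots stop lining up between
    // renders, which is exactly why the rules of hooks forbid this.
    const [bio] = useState("");
    return <p>{name}: {bio}</p>;
  }

  const [likes, setLikes] = useState(0);  // call #2 or #3 depending on the branch above
  return <button onClick={() => setLikes(likes + 1)}>{name} ({likes})</button>;
}
```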
There's a reason that most languages stick to lexical scoping: you can see the dependencies, in the same file.
And a large portion of the value of functional languages is that they avoid state, making magic-at-a-distance impossible.
Boilerplate is not the problem. Magic is the problem.
The 'magic' involved in hooks is a tradeoff; there are real benefits in the way you can consolidate logic, or mix in behaviors. Personally, I strongly prefer hooks to HOCs.
Many technologies have magical behaviors and are still very popular and useful (Rails comes to mind). I'm really liking the pros and cons being brought up in the rest of this thread.
To me this is the most visible win. useSelector for Redux, useIntl for react-intl, useHistory for react-router, useStyles for material-ui, etc. Almost every library I use radically simplified their API by adopting hooks.
It also makes types much easier to analyze (whether using, say, VSCode's inference or Typescript) when using hooks. With HOCs, types tended to get lost through the arbitrary amount of <Wrapped {...props} /> chains.
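Roughly what I mean, in TypeScript; the components and the withWidth HOC below are made up for illustration:

```tsx
import { useState, useEffect, ComponentType } from "react";

// Hook: the return type is visible and inferred at every call site.
function useWindowWidth(): number {
  const [width, setWidth] = useState(window.innerWidth);
  useEffect(() => {
    const onResize = () => setWidth(window.innerWidth);
    window.addEventListener("resize", onResize);
    return () => window.removeEventListener("resize", onResize);
  }, []);
  return width;
}

function Toolbar() {
  const width = useWindowWidth();   // inferred as number, no wrapper in sight
  return <div>{width > 800 ? "wide layout" : "narrow layout"}</div>;
}

// HOC: the injected prop has to be threaded through generics and Omit<>,
// and inference degrades with every <Wrapped {...props} /> layer you stack.
interface InjectedWidth { width: number }

function withWidth<P extends InjectedWidth>(Wrapped: ComponentType<P>) {
  return (props: Omit<P, keyof InjectedWidth>) => {
    const width = useWindowWidth();
    return <Wrapped {...(props as P)} width={width} />;
  };
}
```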
This. So much this. You are 100% correct. Hooks are incredibly stupid. No, your component is not "functional" because you don't use the word "this". You still have a "this", it's just fucking secret now so your debugging is harder. I could go on about all the other reasons hooks are stupid, but JavaScript is largely a cargo cult and I'm a nobody so I'd just be wasting my breath.
I honestly don't know how hooks work that well but I find them easier in general to make quick reusable stuff or just plug things in without having to worry about layers deep of Higher order components. There used to be: class = logic, pure function = takes data and outputs JSX. But now functional components manage their own state and somehow trigger rerenders of themselves (how do they do this btw?). So they don't really seem to be 'functional' in the functional programming sense, but more the 'we use the function syntax of JS' sense. I don't know haha.
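Best I can tell, the answer to my own aside is roughly this: the setter from useState writes into state React keeps for the component, and schedules that component function to run again. Something like:

```tsx
import { useState } from "react";

function Counter() {
  // `count` lives in React's internal slot for this component instance,
  // not in the function itself; the function stays throwaway between calls.
  const [count, setCount] = useState(0);

  // The setter doesn't mutate a local: it writes to that slot and schedules
  // React to call Counter() again, which then reads the new value.
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}
```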
> don't really seem to be 'functional' in the functional programming sense, but more the 'we use the function syntax of JS' sense
That's right. A true function has referential transparency. I get that hooks have ergonomic benefits in some situations, but I wish people wouldn't call them "functional".
How objects work: there's a lookup table for methods and properties associated with your instance to find them by name (or call signature, or whatever).
How hooks work: there's a lookup array associated with your instance (yes, an instance of an object—read the code if you're skeptical, and besides functions are objects in JS anyway so even if I'm wrong, which I'm not, I'm technically right) to find properties and methods by reference order(!?!)
Hooks are just a crippled implementation of objects with weird syntax. In a language that already has non-crippled ones built in with less-weird syntax.
It's storing all the hooked-in functions (methods) and variables (properties) outside the function and associating them with the relevant React view object instance at run-time, which is effectively the hooks' "this". Unless the code's changed substantially and in very fundamental ways since it was introduced. It doesn't bring to mind closures, at least as I read it. It very much brings to mind object/class-system implementations.
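A deliberately dumbed-down toy of that "lookup array by call order" idea; this is not React's actual code, just the shape of it:

```ts
// Toy model only: one component "instance", one slot array, a cursor that
// advances with each hook call. Identity is nothing but call order.
type Slot = { value: unknown };

let slots: Slot[] = [];
let cursor = 0;

function useStateToy<T>(initial: T): [T, (next: T) => void] {
  const i = cursor++;                                   // this hook IS index i
  if (slots[i] === undefined) slots[i] = { value: initial };
  const setValue = (next: T) => { slots[i].value = next; rerender(); };
  return [slots[i].value as T, setValue];
}

function rerender() {
  cursor = 0;       // reset the cursor and run the component function again
  Component();
}

// Two hook calls, distinguished only by the order in which they run; skip or
// reorder one on a later render and every slot after it silently shifts.
function Component() {
  const [first] = useStateToy("first slot");
  const [second] = useStateToy(0);
  console.log(first, second);
}

Component();
```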
Hooks are an elegant & clever idea but they can be difficult to use in practice. You really need to understand in detail how closures work. Manually managing your dependency graph and memoizing in all the right places is harder than the old class-based model in my experience.
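The closure detail that bites most often, in my experience, is the stale capture; a sketch with invented component and endpoint names:

```tsx
import { useEffect, useState } from "react";

function Poller({ userId }: { userId: string }) {
  const [status, setStatus] = useState("unknown");

  useEffect(() => {
    // This callback closes over the `userId` from the render that created it.
    const timer = setInterval(async () => {
      const res = await fetch(`/api/status/${userId}`);
      setStatus(await res.text());
    }, 5000);
    return () => clearInterval(timer);
    // With [] as the dependency array this would poll the original userId
    // forever; listing it re-creates the interval whenever the prop changes.
  }, [userId]);

  return <span>{status}</span>;
}
```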
I've really enjoyed working with React but it seems to me like some of the newer frameworks like Svelte have taken the best ideas from React without the baggage.
> Manually managing your dependency graph and memoizing in all the right places
I wouldn't say it's harder, but it's certainly not simple. There are a handful of mistakes that I see repeated, but if you get over those hurdles, you can significantly simplify your components 99% of the time. It was very easy to have huge componentDidMount and componentDidUpdate methods in class components, with logic scattershot across a big file and without the ability to easily reuse bits of it.
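As a made-up example of the reuse point: the mount-plus-update pairing for "refetch when the id changes" collapses into one hook that any component can call:

```tsx
import { useEffect, useState } from "react";

// The componentDidMount + componentDidUpdate + componentWillUnmount choreography,
// consolidated and reusable. Endpoint and names are illustrative only.
function useDocument(id: string): string | null {
  const [doc, setDoc] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false;
    fetch(`/api/documents/${id}`)
      .then(res => res.text())
      .then(text => { if (!cancelled) setDoc(text); });
    return () => { cancelled = true; };   // drop responses for ids we've moved past
  }, [id]);                               // runs on mount and whenever id changes

  return doc;
}

function DocumentView({ id }: { id: string }) {
  const doc = useDocument(id);
  return <article>{doc ?? "loading…"}</article>;
}
```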
I converted a medium-sized React codebase from classes to hooks. In most cases it simplified the components and eliminated boilerplate. But it also introduced more than a few very tricky bugs and serious performance regressions that were not trivial to fix.
The React team made the wrong separation of concerns with hooks.
The class component should lose its ability to render, replaced by an attachable functional renderer. In its place, the class component should have composable and detachable state and substates with their own lifecycles, each communicating via events within the same context (roughly sketched below).
It will be truer to `ui = fn(state)` principle.
This is the result of contemplation after learning what the functional-programming people and the Rust community are doing, and then coming back to my laptop showing my professional project in React and TypeScript.
It took me months to experiment and reach the decision which finally helps the team to write and iterate faster. I hope this will help everyone facing those React problems.
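Roughly how I picture that separation; this is purely illustrative, and every name below is invented rather than a real API:

```ts
// Hypothetical: state lives in a composable class container with its own
// lifecycle and events; rendering is a detached pure function attached to it.
class CartState {
  items: string[] = [];
  private listeners = new Set<() => void>();

  add(item: string) { this.items = [...this.items, item]; this.emit(); }
  onChange(fn: () => void): () => void {
    this.listeners.add(fn);
    return () => { this.listeners.delete(fn); };
  }
  private emit() { this.listeners.forEach(fn => fn()); }
}

// ui = fn(state): the renderer knows nothing about lifecycles or events.
const renderCart = (state: CartState) =>
  `<ul>${state.items.map(i => `<li>${i}</li>`).join("")}</ul>`;

// "Attach" the functional renderer to the state container's lifecycle.
function attach(state: CartState, render: (s: CartState) => string, mount: (html: string) => void) {
  mount(render(state));
  return state.onChange(() => mount(render(state)));
}

// Usage sketch: the re-render is driven by the state's event,
// not by the class rendering itself.
const cart = new CartState();
attach(cart, renderCart, html => console.log(html));
cart.add("coffee");
```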
The knowledge gap around closures is simply an indication that you need to solidify your JavaScript foundation prior to understanding hooks. I only see this as a benefit.
well, from my point of view i write 40% the amount of code with react hooks than i did with react classes, i probably reused about 40% more code, and can write components 50% faster than before. i also refer to React documentation about half as much as before.
not sure what's 'fucking stupid' about that.
it might be harder to grok at first - but that's the reality of tools - by nature, they get more complex but more elegant. i think it's fucking stupid to want to go back to componentDidMount, componentDidUpdate, componentWillReceiveProps, componentWillUnmount, getDerivedStateFromProps and UNSAFE_componentWillUpdate. like... really?
I never understood what was wrong with class components anyways. What did Hooks bring that couldn't be done in an easier to understand way with class components?
Hooks allow for declarative behaviour that’s harder to model other ways. With hooks, something like declaring when an event listener should be in play becomes much cleaner. The alternatives with class components are messy.
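For instance (the component name here is invented), "this key listener is in play exactly while the component is mounted and enabled" becomes a single declaration:

```tsx
import { useEffect } from "react";

function ShortcutHelp({ enabled }: { enabled: boolean }) {
  useEffect(() => {
    if (!enabled) return;   // declaratively "not in play"
    const onKey = (e: KeyboardEvent) => {
      if (e.key === "?") console.log("show help overlay");
    };
    window.addEventListener("keydown", onKey);
    return () => window.removeEventListener("keydown", onKey);   // removed on unmount or when disabled
  }, [enabled]);

  return null;
}
```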
The above criticism that you don’t get to have pure functional components anymore doesn’t really make sense to me - either you have some lingering state to deal with, or you write a pure function. Your hand is forced by the problem. You could switch over to class components but they’re really not much clearer to read.
Most of the bugs I’ve seen have been around JavaScript’s crummy equality checks and the need for more memoisation.
The problem wasn't the fact that components were classes. The problems were the React lifecycle methods. People did some crazy things with instance variables, shouldComponentUpdate, and componentDidUpdate, and especially the deprecated componentWillReceiveProps.
very much this. I had to refuse a lot of code because developers were using these in inconsistent, confusing, and actually incorrect (read buggy) ways.
I found class components really... wordy, binding this to this all the time, and lifecycle methods could become kinda wild after a while, doing all these checks for a bunch of things... and imo that just trended towards these bulky spaghetti class components / lifecycle methods.
I think the use of classes was just some sugar to help OO and Java people (like myself) into React components. They're not really classes in the useful sense, and I found my components littered with functions that returned blobs of JSX that felt too small to be factored into full "classes".
Smaller function components and then adding state with `useState` has simplified my code.
The React team’s claim (see my other comment) has been that by using React at all, you are already fully bought into all this magic, it’s just harder to tell.
Absolutely 100% agree; Reactive programming environments become easily unmanageable unless they are Functional Reactive Programming for the exact reason you state. Hooks are a way to manage that complexity from an imperative perspective with the illusion of a declarative veneer.
Basically every call of a functional component MyComponent() represents a new scope. When you're working with a hook such as useEffect(), you have to pay attention to the dynamic scope and correctly trigger the useEffect() via the dependency array.
Correct, but the callback passed to useEffect is only scoped to the call stack triggered by the dependency array. So, this makes the callback passed to useEffect dynamically scoped.
The callback is always defined; it's just not always invoked. Otherwise it's a regular lexical closure like any callback in JavaScript. Maybe I'm not understanding what you mean by dynamic scope.
Yes, it's just regular lexical closure. I don't think his comment was literal since JS itself is lexically scoped. If you think about how `this` works in JS, it's very similar to dynamic scoping because it matters where the function is called.
With hooks and dependency arrays, similar to dynamically scoped languages, it matters where the function is called.
Thing is, people rarely write functions that use `this` in dynamically scoped ways anymore. Remember when you had to explicitly bind function scope everywhere? With ES6 and arrow functions, I don't miss that.
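Roughly the difference, for anyone who never had to write it:

```ts
class Ticker {
  count = 0;

  // Pre-arrow style: `this` depends on how the function is *called*, so the
  // method has to be explicitly bound before being handed out as a callback.
  tickBound = this.tick.bind(this);
  tick() { this.count++; }

  // Arrow class field: `this` is captured lexically, no bind needed.
  tickArrow = () => { this.count++; };
}

const t = new Ticker();
setTimeout(t.tickBound, 100);   // fine
setTimeout(t.tickArrow, 100);   // fine
setTimeout(t.tick, 100);        // broken: `this` is undefined inside tick
```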
Yes. Except mobx ;)
I like to use function components together with class based observable view models. Only a single hook wires them up. Works like a charm and avoids all the IMHO confusing hook complexity.
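A sketch of the wiring I mean; the view-model class and useViewModel hook below are simplified stand-ins, not the exact code I use (the real thing could equally sit on mobx-react's observer or useSyncExternalStore):

```tsx
import { useEffect, useState } from "react";

// Plain class-based view model: all state and logic live here, no hooks involved.
class TimerViewModel {
  seconds = 0;
  private listeners = new Set<() => void>();

  start() { setInterval(() => { this.seconds++; this.notify(); }, 1000); }
  subscribe(fn: () => void): () => void {
    this.listeners.add(fn);
    return () => { this.listeners.delete(fn); };
  }
  private notify() { this.listeners.forEach(fn => fn()); }
}

// The single hook that wires any subscribable view model into a function component.
function useViewModel<T extends { subscribe(fn: () => void): () => void }>(vm: T): T {
  const [, force] = useState(0);
  useEffect(() => vm.subscribe(() => force(n => n + 1)), [vm]);
  return vm;
}

const timer = new TimerViewModel();
timer.start();

function Clock() {
  const vm = useViewModel(timer);   // the only hook in the component
  return <span>{vm.seconds}s elapsed</span>;
}
```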
Hmm, so the reusable bit is the straightforward inject-everything component, driven by an app-specific, app-aware hook-using part?
I can see how that can work for simple cases. Nesting components is going to get tricky though if the classes don't operate exactly the way the hooks expect.
Of course that's the problem: someone built hooks for their trivial cases and now they're the 'preferred' approach...
Edit: To clarify, 'simple' is going to be context-dependent since hook behaviour is. If your 'driving skeleton' of hook-based components is in the direct uninterrupted ancestry chain of every class component, you're probably using hooks in a near-ideal case.
Bad shit happens to a minority of people every day, permanently disrupting their lives and forcing them to abandon long-term plans. The majority remain oblivious and see an enduring status quo.
Suddenly a Black Swan craps on everyone at once, and a great many people are whining 'why is this happening to me' and 'when will someone fix this so I can go back to my routine'.
Guess what? Your routine is probably fucked. Throw it out, get used to the new normal, and accept that no one knows how to fix this (yet?), just like every other time we get hit by a context-changing problem.
Of course some people get too attached to their context, and those who come later and find their remains might label such events as 'out-of-context problems'...
"People don't consider my games worth what I spent to make them, so I'll sneak in adverts too so I don't have to charge them more and pop their little bubble, so they can continue thinking games are this cheap because the sticker price is still low."
No, this is not acceptable, and holding it up as a Good Thing that it's being perpetuated is stupid.
Microsoft have seen the writing on the wall (because it's been there for, I dunno, at least a decade?) and are saying 'you have to have an actual workable fucking business model now, k?'
Who says people don't consider his games worth money? He offers them for free with ads and a lot of people happily accept. Were the marketplace rules changed to exclude all free games, he might charge for his and people might buy them. I don't see a value judgement about the worth of his games here.
The f2p market will continue to slow [1] and this guy bought himself some time in his market. Good for him. A dead end story for the rest of us.
Sounds reasonable to me - how else is he going to pay the bills if not through some ad revenue? You know, glorified 9 to 5 jobs are cool, but some of us indies prefer freedom - and that costs money.
Then you charge a reasonable price. You don't put a 'free' sticker on it with an unknowable 'ad revenue' cost in the background.
One might argue that the customer should know the price they pay in advertising data-scraping before logging in, these days, but we still convict people of fraud for taking advantage.
Television has been paying for programming through ads for decades now. Do you think that's not a 'real' business model? A good business model is one that makes someone 7 figures a year for cloning solitaire. It might not be the most enjoyable gaming experience, but if purity of solitaire is what you want, then there are plenty of paid alternatives.
A massive difference here is that TV ads don't come with ubiquitous invasions of privacy every time you see them, nor do they have a small but constant risk of malware infecting your TV.
> Channel 4 earlier this week unveiled a new video on demand advertising package allowing brands to directly address viewers - in practice this meant first adopters 20th Century Fox, Foster's and Ronseal could grab viewers' attention by literally calling out their names in their creative.
> nor do they have a small but constant risk of malware infecting your TV.
The same goes for anything that you install / access on your devices, yet we aren't talking about removing these capabilities, and I sure hope you won't argue that we should.
That just means you'll go out of business then and the users will end up using a Chinese/Russian/other country (knockoff) instead with the same ads snuck in there. It costs a lot less to develop software in certain other countries.
Justification doesn't exist in a vacuum, what you're justifying matters just as much, and showing ads from Microsoft is pretty low on the scale of evil things apps do for monetization.
This is what happens when participating in a race to the bottom. Mobile games are cheap because (due to current global economics) there is always someone prepared to clone your design and sell it cheaper. If you want to pay the bills, you need to find a market you can be profitable in. That might even be mobile gaming, if you can deliver IP or a franchise or a series that gets people to pay a reasonable price for your game rather than a bargain price for something similar to your game. Heck, if you get the IP right like Angry Birds you even get to undercut your competitors with a free price tag and no ads, driving your T-Shirt sales.
“Heck, if you get the IP right like Angry Birds you even get to undercut your competitors with a free price tag and no ads, driving your T-Shirt sales.”
Apparently you haven’t played Angry Birds 2. It’s riddled with upsells.
sigh If your application is slow, odds are it's not because you used exceptions.
If you're throwing enough exceptions for this to matter it'll show up on a profiler, and then you can change that specific chunk of code to avoid treating that particular case as 'exceptional'.
Tim Sweeney, creator of Unreal Engine, Epic Megagames, etc, recently had a code base where exceptions were causing problems even when not actively being thrown:
>They weren’t throwing and a disassembly showed no visible artifacts of exceptions. But turning off the possibility of throwing exceptions gained 15% and just made the assembly code tighter with no clear pattern to the improvements.
I am very familiar with this effect. We could of course use effects like this to cast doubt on any benchmark someone has ever run, unless they specifically mention that they tested for this, and 100 other benchmarking gotchas. We can, if we like, assert that all things are unknowable, while at the same time asserting that the thing we want to be true is for sure true.
However, it appears to be a rather commonplace occurrence that having exceptions on, even if you aren't using them, can cause performance problems; it isn't just a one-off in Tim's case. Also, Tim has quite a bit of experience working on bleeding-edge C and C++ performance code, so there is a good chance he did account for this. You can ask him.
How common is it, actually? And on what platforms/architectures? It was a common problem back in the day when most code was compiled for x86, since exceptions weren't designed to be zero-cost in that ABI.
For what it's worth, the article itself has this bit:
"Thanks to the zero-cost exception model used in most C++ implementations (see section 5.4 of TR18015), the code in a try block runs without any overhead."
It has been at least a 10% effect in the last two things I profiled, which were a simple software rasteriser and a distributed key-value store. The other significant benchmarking gotchas I see are: 1) the CPU being in a sleep mode when the test starts and taking a while to get running at full speed, and 2) other stuff running on the machine causing interference. But these two are easy to work around compared to the alignment-sensitivity problem.
I didn't mean to disagree with the conclusion. My point was more that it is hard to be confident in the causes of results like this. It'd be great if we had tools that could randomise alignments etc, so we could take lots of samples and get more confidence. As far as I know those tools don't exist and we just have to use experience.
This sounds like a straight up optimizer bug, and should be reported as such.
And if profiling does show up a case like that, the appropriate response to that is slapping noexcept on the function, not disabling exceptions globally.
Indeed, and this was measured, and the improvement was worth it. Profiling! :P The cost isn't always where we think it is.
Still, 'turn off exceptions because exceptions are slow' is a daft rule of thumb for the majority of software, where the slowness probably has more to do with choice of data structures, etc than compiler/platform implementation of language features.
I think that in theory there should be no difference between code that can throw and that can't from the point of view of the optimizer. In practice I think that sometimes some code motion passes are disabled in the presence of abnormal control flow because the compensation code that would be otherwise required becomes unwieldy and hard to prove correct.
Some case might be “exceptional” in the sense that it doesn’t happen most of the time (and thus doesn’t show up on regular profile checks), but when it does happen the failures are highly correlated and suddenly you’re throwing thousands of exceptions per second all over the place.
This is also the time one finds out that the exception handling code on typical C++ runtimes takes out a global lock and the multithreaded application grinds to a halt - the raw CPU cost of an exception is not the most pressing problem at this point.
malloc takes out a global lock and performance doesn't 'grind to a halt' from a single allocation. Hundreds of thousands of allocations per second per core and concurrency will suffer, but most programs either don't actually have nearly enough concurrency or minimize their allocations to the point where it isn't a primary bottleneck.
I think the context you're working in matters heavily with such an assertion. In fact, it is quite common in game development to bulk allocate and then use custom allocators within those regions specifically to avoid the drag on performance and determinism inflicted by malloc.
Bulk allocations with custom allocators would either be exactly what I said (low malloc calls) or would just not use malloc at all and use the system memory mapping functions.
The larger point was that an exception being thrown and taking a global lock is not going to tank performance. That would imply that all other threads are trying to take the same lock at the same time and that the thread that gets it holds it for a long time.
Even in the case of malloc, where there could be actual heavy lock contention, this is not always a bottleneck.
Of course, it would depend on the workload. An exception here and there wouldn’t matter, but for a somewhat contrived example, using exceptions for something like a bad connection to a database on a server with dozens of threads could easily turn into a concurrency and a resource exhaustion problem (where every thread starts seeing the same error at the same time).
All modern common implementations (I think). MSVC's malloc definitely locks. I think gcc, clang and icc do too. This is a big reason why tcmalloc and jemalloc were created.
I don't know about MSVC, but glibc malloc has per thread caches. At some point you need to hit the global allocator or sbrk or mmap of course and that might take a global lock.
Looking at the disassembly the machine code is ~2x the size for the exception versions, but most of it is on the cold path.
The exception version has a conditional branch to do the "== errorInt" part. The non-exception version manages to avoid the conditional branch by using a conditional move, which would avoid a pipeline stall on a branch mis-prediction.
Edit: I think this disproves desc's point ("If your application is slow <because of exceptions> it'll show up on a profiler"). i.e. there's probably a small cost to exceptions even when they are not taken, and it will be spread across your entire program and will not show up as a single spike in a profiler.
> The non-exception version manages to avoid the conditional branch by using a conditional move, which would avoid a pipeline stall on a branch mis-prediction.
Branches are usually superior to conditional moves for predictable conditions, as they break dependency chains. In case the exceptional code path is taken, the cost of the misprediction is dwarfed by the cost of unwinding the stack.
This is interesting, actually: the fact that the compiler uses a conditional move in the error-checking case could mean that it has no useful branch probability model for that branch. But even when using __builtin_expect, the compiler still prefers the conditional move.
Agner Fog is the usual go-to reference. For this specific case, you can also google any of Linus rants on conditional moves (they used to be very high latency, although today they are not so much of an issue). This one for example: https://yarchive.net/comp/linux/cmov.html
It is complicated to describe when cmov is slow and when it is fast. As a rule of thumb, if the next loop iteration's data operations depend on a cmov in this one (and so on, round and round), cmov will be slow. If not, it is very, very fast. Use of cmov can make quicksort 2x as fast.
Gcc absolutely won't generate two cmov instructions in a basic block. Clang, for its part, abandons practically all optimization of loops that could conceivably generate a throw.
The problem with benchmarks is that I never see any that estimate the impact of the extra code size on programs the size of, say, Photoshop. It takes annoyingly long to load such a program. Is code size part of that problem? Probably. Is the bloat added by exceptions significant? I'd like to know.
When it takes a program too long to load, it is because the program is doing too much non-exception work. The exception-handling code is not even being loaded unless it's throwing while it loads, which would just be bad design.
Interesting: GCC 7.x seems to simply put the cold branch on a separate nop-padded cacheline.
GCC 9 [1] instead moves the exception-throwing branch into a cold clone of the exitWithMessageException function. The behaviour seems to have changed starting from GCC 8.x.
Ooo, fancy. There is still a long way from just that to actually getting the savings in a real program running on a real operating system. For example, if I have thousands of source files, each with a few hundred bytes of cold exception handlers, do they get coalesced into a single block for the whole program?
Code paths introduced in order to execute any potential stack unwinding are inefficient and they make your code slow. Especially tight loops. This was common knowledge back in the 2000s.
Common knowledge, but not correct. Code to destroy objects has to be generated for regular function returns, and is jumped into by the exception handler too. Managing resources by hand, instead, would also require code, but you have to write it. Its expense arises from its fragility.
This one adds functions that call the exception-based and error code based functions in a simple for loop. Both handle the error.
Unless I've screwed up somewhere, I think the result is that in the exception case, the body of the inner loop contains 13 instructions, while the error code case contains 5.
Also, the generated code for the exception case is harder to read and understand. When writing performance critical code I like to eye-ball the disassembly just to make sure the compiler didn't do anything unexpected. This task is hard enough already in non-trivial functions, I certainly don't want it getting any harder.
Also, generally, having slow code is less of a problem than having expensive code - exceptions can allow you to use more succinct expression that will lower long-term maintenance costs. The full lack of exceptions is one of the things that I think continues to impair Go, which has a lot of other neat ideas (and a really good way to deal with sub-addressing in the form of slices).
Panic/recover are much more limited and while you can build an exception system from them (much like all turing complete things can be all other turing complete things) it is a pain and extremely inefficient. So panics do exist and are used for I/O errors sometimes but it's quite inconsistent.
This could be done, if everyone involved were willing to accept massive risk to the lives of the astronauts (including the astronauts, obviously).
By which I mean, after hurriedly building rockets and testing them, finally going ahead with a backup rocket and crew to meet the deadline if the first one blows up on the pad.
With the risk of the backup doing the same...
If they're simply gambling on technology and engineering these days 'being faster', they're doomed to failure and I hope the individuals involved don't wind up killing anyone.
It happens because someone doesn't understand that search rankings vs. queries have to have a 'smooth' solution, and thinks they can tweak a few weightings here and there to get the results they think are 'correct' for the very specific cases they have in mind, without realising that this has a knock-on effect on the entire scoring space and you get odd singularities and stuff popping up as a result.
The people in question should not be touching anything even distantly related to a computer in any professional context.
I try not to say things like that anymore, you've never met the person and have no idea of the context or their skills.
Could be that they were forced to change it at the 11th hour just before Windows 10 was being demo'd and it was never changed back. Could be a committee decision. Could be the programmer hated the surveillance installed by MS and so deliberately made the search bad.
The 'someone' in question probably wasn't the engineer.
I'd bet the engineer was listening to the 'someone' saying things like 'why don't you just' or 'it just needs to do this', or other things including the word 'just'.
I'd bet the engineer tried to explain the realities of how this stuff works, to 'someone' whose job didn't involve understanding what they were asking for, and who reacted by demanding with the word 'just' instead of pretending to ask.
Can't say I ran into this particular bug, but I tend not to trust the search on Windows much anyway. I myself have implemented better text/prefix matching and ranking using a very simple SQLite database schema, on a far larger dataset than 'the local machine', yet Microsoft repeatedly fail at a task which should succumb easily even to brute-force approaches.
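The kind of thing I mean, sketched here with Node's better-sqlite3 package and SQLite's FTS5 rather than what I actually shipped; the schema and data are invented, and I'm assuming the bundled SQLite has FTS5 available (it normally does):

```ts
import Database from "better-sqlite3";

const db = new Database(":memory:");

// One FTS5 virtual table is the whole "schema"; prefix matching and
// relevance ranking come for free.
db.exec(`CREATE VIRTUAL TABLE items USING fts5(name, path)`);

const insert = db.prepare(`INSERT INTO items (name, path) VALUES (?, ?)`);
insert.run("Notepad", "C:/Windows/notepad.exe");
insert.run("Notes on networking", "D:/docs/networking.md");
insert.run("Control Panel", "shell:ControlPanel");

// 'note*' is a prefix query: it matches "Notepad" and "Notes ...", ranked by
// relevance, in well under a millisecond at Start Menu scale.
const rows = db
  .prepare(`SELECT name, path FROM items WHERE items MATCH ? ORDER BY rank LIMIT 10`)
  .all("note*");

console.log(rows);
```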
They don't get to claim that it's a harder problem than indexing a hard disk without first explaining why they think they should retain the use of their fingers for continuing to send search terms over a network after I, as the owner of the hardware and 'owner' of the OS instance have made a concerted effort to turn that shit off.
Jeffrey Snover[1] likes to say that "Microsoft is incapable of sustained error", yet they've been incapable of building a good search for at least 15-20 years that I've experienced. Bing search is no competitor to Google, Explorer filesystem search is slow and untrustworthy, the search which integrates desktops to Windows server indexing of fileshares is slow and untrustworthy, on-premises SharePoint search has never worked well for finding documents for me, Start Menu search is the worst at matching plain text strings in short names year after year, Windows version after Windows version, local Outlook Search wasn't great, Outlook search backed by Exchange isn't great, and it goes easily back to Joel on Software's blog post[2] of 2004 where he says
> Just do me a favor and search the damned hard drive, quickly, for the string I typed, using full-text indexes and other technologies that were boring in 1973.
That's still what I want, and still what they don't do.
[1] PowerShell originator, now "Chief Architect for the Azure Infrastructure and Management Group" at Microsoft.
Here's the thing: I've built, back in 2005, a full-text search for 60 GB of small-text data that would typically return results in under 15ms. This was using 100% Microsoft-native technology, like C# and SQL.
How much data does the Start Menu search have to index, total? A few hundred KB? Something like that. This is assuming a fresh install, no documents, no third party apps.
A modern CPU can simply brute force through that amount of data and perform nearly arbitrary subtext matching, without any fancy indexing techniques, in something like a handful of microseconds. Even in an absolute worst-case of cross-process pointer chasing out into uncached main memory, you're talking a few tens of milliseconds, max.
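The brute-force version is almost too short to bother writing (data invented):

```ts
// A few hundred KB of names: a linear scan with substring scoring is effectively free.
const entries = ["Notepad", "Control Panel", "Network Connections", "Notes", "Calculator"];

function search(query: string, items: string[]): string[] {
  const q = query.toLowerCase();
  return items
    .map(name => ({ name, at: name.toLowerCase().indexOf(q) }))
    .filter(e => e.at >= 0)              // keep anything containing the query
    .sort((a, b) => a.at - b.at)         // crude ranking: earlier match wins
    .map(e => e.name);
}

console.log(search("not", entries));     // ["Notepad", "Notes"]
```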
Yet, a fresh install of any version of Windows, including Server editions, simply flat out fails to find control panel items or shortcuts even if their names are typed exactly. Many hours later it'll find some of them, not necessarily all. Days later, perhaps it'll reach a 100% match, but that's certainly not guaranteed.
For me, often having to use freshly installed Windows VMs, I see a nearly 100% failure rate for this component.
The second half of the quote is, “but sometimes it takes us a couple decades to get right”. :-)
Seriously though, have you checked out the search in O365?
It is starting to get really good and wait till you see how this progresses in the next couple of years.
The problem is that this is a bunch of bollocks. But at the same time, that might actually resolve my problem with it as the pain gets fed back.
If one is very very lucky, a library will have accurate changenotes explaining what the version increment actually means, distinguishing between security updates and wanking-over-new-subdependency-for-shiny-irrelevant-features updates.
However, if people are penalised for not wasting their time chasing the latest version of leftpad-with-custom-characters-except-in-locations-which-are-prime-or-square.0.4.3-beta.butnotreally, maybe we'll see shallower dependency trees in the important stuff.
Where 'important' ends up being defined as 'the packages which everyone else gravitates to, and therefore can't be avoided'.
Ideally we'd see security updates for previous major versions of things, for those of us without feature addiction, but that would demand more of the devs producing this crap.