Response times in seconds if you hit partitions (a kind of index that only supports equality and, in this case, is implemented as folders) and your files are in a format that carries content metadata, like Parquet or ORC.
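To make "hitting a partition" concrete, here's a minimal sketch, assuming DuckDB as the query engine and a made-up events/ layout (neither is specified above): the WHERE clause filters on the partition column encoded in the folder names, so non-matching prefixes get skipped before any data is read.

```python
import duckdb

# Hypothetical Hive-style layout (paths and column names are made up):
#   /data/events/dt=2024-07-01/part-0000.parquet
#   /data/events/dt=2024-07-02/part-0000.parquet
# The equality predicate on the partition column (dt) prunes every non-matching
# folder, and the Parquet footers narrow the scan inside the files it does open.
con = duckdb.connect()
rows = con.sql("""
    SELECT count(*) AS events
    FROM read_parquet('/data/events/*/*.parquet', hive_partitioning = true)
    WHERE dt = '2024-07-01'
""").fetchall()

# The same query works against an s3:// prefix once the httpfs extension and
# credentials are set up.
```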
It's not a BI tool, though. If queries have high variability in the WHERE clause and you can't leverage indexes, then you're looking at minutes of response time.
If the data isn't in a structured format but in plain CSV/JSON, all bets are off.
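For contrast, a hedged sketch of the slow cases under the same made-up layout: filtering on a column that isn't in the path means every folder gets visited (Parquet row-group statistics can still skip chunks), and plain CSV has no footer metadata at all, so every file is read and parsed in full.

```python
import duckdb

con = duckdb.connect()

# WHERE clause on a non-partition column: no folder can be ruled out from the
# path alone, so every file is opened. Parquet min/max stats may still skip
# row groups, but this is the "minutes" case.
con.sql("""
    SELECT user_id, count(*) AS events
    FROM read_parquet('/data/events/*/*.parquet', hive_partitioning = true)
    WHERE user_id = 42
    GROUP BY user_id
""").fetchall()

# Same query over plain CSV: no column pruning and no statistics, so the
# engine reads and parses every byte before it can apply the filter.
con.sql("""
    SELECT user_id, count(*) AS events
    FROM read_csv_auto('/data/events_csv/*.csv')
    WHERE user_id = 42
    GROUP BY user_id
""").fetchall()
```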
I've not yet tested it at terabyte scale, though it should happily scale up there.
I haven't put it through any stress tests. I look at this kind of tool as a nice convenience; if I needed something high-throughput, I'd probably want a fully baked data warehouse pipeline.
It really, really, really depends on how you set up your prefix/"folder" structure and the underlying file format. Though that's almost certainly true here.
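As an illustration of why the prefix structure matters, here's a sketch (with made-up column names, using pyarrow) that writes the same rows two ways: Hive-partitioned by the column most queries filter on, versus a flat prefix. Only the first layout gives a query engine anything to prune by path.

```python
import pyarrow as pa
import pyarrow.dataset as ds

# Hypothetical rows; "dt" is the column most queries filter on.
table = pa.table({
    "dt": ["2024-07-01", "2024-07-01", "2024-07-02"],
    "user_id": [1, 2, 3],
    "amount": [9.99, 4.50, 12.00],
})

# Layout A: Hive-style partitioning on dt -> /data/events/dt=2024-07-01/...
# An equality filter on dt can skip whole prefixes without reading them.
ds.write_dataset(
    table, "/data/events", format="parquet",
    partitioning=ds.partitioning(pa.schema([("dt", pa.string())]), flavor="hive"),
)

# Layout B: one flat prefix with no partition columns -> every file is a
# candidate for every query, regardless of the WHERE clause.
ds.write_dataset(table, "/data/events_flat", format="parquet")
```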