From a storage perspective, many NoSQL technologies claim to handle massive volumes of parallel requests and data with ease. In the real world, however, most of them run into the same system resource limitations (threading, CPU, memory, I/O, etc.) that contribute to latency and stability issues. Many of these symptoms can be masked by horizontal scaling, but at the end of the day, the more efficiently your applications use resources, the better things scale out...
I first encountered this use case when storing high-volume message data in Oracle. The issue is that transaction commits are expensive, and committing a single message at a time simply doesn't scale. Instead, we switched to aggregating messages in memory and then passing them to Oracle as an array. The net result was dramatically reduced load on the database, negligible added delay in processing, and a theoretically higher chance of message loss...overall, this addressed our performance issues with relatively little impact.
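The original Oracle array-binding code is long gone, but the idea was roughly this (a sketch using standard JDBC batching; the table and column names are made up):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class MessageBatchWriter {

    // write an in-memory batch of messages to the database with a single commit,
    // instead of committing one row per message
    public void flush(Connection conn, List<String> messages) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "insert into messages (payload) values (?)")) {
            for (String msg : messages) {
                ps.setString(1, msg);
                ps.addBatch();
            }
            ps.executeBatch();  // one round trip for the whole batch
            conn.commit();      // one commit for the whole batch
        }
    }
}
```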
Then, as we moved into NoSQL solutions (Cassandra, HDFS, ElasticSearch, etc.), I revisited the need for batching...it was obvious that the same batching strategy applied to these new technologies as well...
Luckily, we are working with Apache Camel, and its implementation of EIPs (Enterprise Integration Patterns) makes solving these types of issues fairly straightforward. In particular, the split and aggregator patterns are designed for just this type of message flow.
For example, let's say I have three systems that process messages: A, B, and C. System A produces messages and sends them to system B. System B does some processing and then sends them to system C for final processing.
In Camel, this is expressed by the following simple route (assuming ActiveMQ is used as the message bus between systems).
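A minimal sketch in the Java DSL might look like this (the queue names and the systemB bean are placeholders, not actual endpoints):

```java
import org.apache.camel.builder.RouteBuilder;

public class SystemBRoute extends RouteBuilder {
    @Override
    public void configure() {
        // consume messages produced by system A, process them, and forward to system C
        from("activemq:queue:systemA.out")
            .to("bean:systemB")                  // system B's processing
            .to("activemq:queue:systemC.in");    // hand off to system C
    }
}
```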
Now, if we find that system B requires batch sizes of 100, I can easily batch messages together using a simple aggregator, as follows...
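Here is one way to sketch it, using a custom AggregationStrategy that collects message bodies into a List (this assumes the Camel 2.x package; in Camel 3 the AggregationStrategy interface moved to org.apache.camel):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class SystemBBatchingRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:systemA.out")
            // correlate everything together and release a batch at 100 messages or 1000ms
            .aggregate(constant(true), new ListAggregationStrategy())
                .completionSize(100)
                .completionTimeout(1000)
            .to("bean:systemB")
            .to("activemq:queue:systemC.in");
    }

    // collects individual message bodies into a single List body
    public static class ListAggregationStrategy implements AggregationStrategy {
        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            if (oldExchange == null) {
                List<Object> batch = new ArrayList<Object>();
                batch.add(newExchange.getIn().getBody());
                newExchange.getIn().setBody(batch);
                return newExchange;
            }
            List<Object> batch = oldExchange.getIn().getBody(List.class);
            batch.add(newExchange.getIn().getBody());
            return oldExchange;
        }
    }
}
```

Camel also ships grouped aggregation strategies out of the box if you'd rather not roll your own; the custom one above just makes the batching behavior explicit.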
Given the config above, we'll pass on a batch after 100 messages have been aggregated (completionSize) OR after 1000ms (completionTimeout)...the latter is key to limiting the processing delay when volume is low. Also note that the above would pass the same batch of 100 on to system C...
Now, let's assume system C prefers a batch size of 10 and must receive messages grouped by accountId...here are the changes required:
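Roughly, the route body from the previous example changes like this (reusing ListAggregationStrategy from above, and assuming each message body exposes an accountId property, e.g. a bean getter or Map key, that the simple expression can read):

```java
from("activemq:queue:systemA.out")
    // batch of 100 for system B, as before
    .aggregate(constant(true), new ListAggregationStrategy())
        .completionSize(100)
        .completionTimeout(1000)
    .to("bean:systemB")
    // break the batch of 100 back into individual messages
    .split(body())
        // regroup by accountId in batches of 10 for system C
        .aggregate(simple("${body.accountId}"), new ListAggregationStrategy())
            .completionSize(10)
            .completionTimeout(1000)
        .to("activemq:queue:systemC.in");
```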
As you can see, these are very flexible and easy-to-implement patterns for message flow. That said, you do have to give some consideration to memory/CPU usage and overall message reliability requirements when using them.
Overall, I've used this pattern to successfully scale out high volume requests to Oracle, Cassandra, HDFS and ElasticSearch...