OK, we're back!
As I mentioned in the previous post (published on February 9th), here at Spin Rewriter we always strive to further improve our server infrastructure and fine-tune our code. We're looking to deliver faster and better software that's more intuitive and more robust than ever before.
We've now taken advantage of some new opportunities to parallelize certain aspects of Spin Rewriter's existing spinning process. This means that we're now able to run certain parts of the spinning process on multiple processors (CPUs) on different servers at the same time instead of running it on just one server.
Here's a simplified explanation of the benefits this brings. Imagine you have an article with 20 sentences. We can have our software detect the parts of speech in each of those sentences one after another — and if it takes our software one second to analyze each sentence, we're looking at 20 x 1 second = 20 seconds of waiting time for the user.
We can, however, break this article down into 20 separate sentences — and send each of these sentences to a different server in our grid at the same time. Each of those 20 servers then processes just one sentence, and only 1 second later we get the processed results back to the central system.
The result: The same article is now fully processed in just 1 second instead of 20.
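The fan-out-and-collect pattern described above can be sketched in a few lines of Python. This is purely illustrative and not Spin Rewriter's actual code: `analyze_sentence` is a hypothetical stand-in for the per-sentence analysis, and local threads stand in for the separate servers in the grid.

```python
# Illustrative sketch of parallel per-sentence processing.
# In the real setup each call would go to a different server;
# here, threads simulate that fan-out on a single machine.
from concurrent.futures import ThreadPoolExecutor
import time

def analyze_sentence(sentence):
    # Hypothetical stand-in for part-of-speech analysis:
    # simulate roughly one second of work per sentence.
    time.sleep(1)
    return f"analyzed: {sentence}"

def process_article(sentences):
    # Fan out: every sentence is processed concurrently.
    # Fan in: map() returns results in the original order.
    with ThreadPoolExecutor(max_workers=len(sentences)) as pool:
        return list(pool.map(analyze_sentence, sentences))

sentences = [f"Sentence {i}." for i in range(1, 21)]
start = time.time()
results = process_article(sentences)
elapsed = time.time() - start  # close to 1 second, not 20
```

Because all 20 calls run at once, the total wall-clock time is dominated by the slowest single sentence rather than the sum of all of them.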
And the result that matters much more than that: Our users are even happier! 😃
Of course, the real-world situation isn't quite as cut and dried, but we were still able to roll out a significant improvement in the speed and robustness of our spinning systems — and we hope you'll love it!
Published on: February 24th, 2016