If you want to further improve the throughput of the Rust quicksort implementation, what are the options you may consider?
Answer
Concurrency alone is unlikely to help, since the sorting logic is CPU-bound and never blocks on I/O.
One of the options is parallelism.
We can leverage the multiple cores of the target system to improve throughput further.
Rust slices can be split into disjoint regions (for example, with `split_at_mut`) and safely handed to parallel threads.
Rayon is a data-parallelism library for Rust worth exploring: the partition-and-recurse logic can be submitted as "tasks" that Rayon schedules across CPU cores.
The picture below is an abstract representation of this idea.
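The idea above can be sketched with only the standard library: partition the slice, then use `split_at_mut` to obtain two non-overlapping mutable sub-slices and sort them on separate threads via `std::thread::scope`. This is a minimal illustration, not a tuned implementation; Rayon's `rayon::join` would replace the manual scope with a work-stealing thread pool, and a real implementation would fall back to sequential sorting below a size threshold.

```rust
// Partition around the last element as pivot; returns the pivot's final index.
fn partition(v: &mut [i32]) -> usize {
    let pivot = v.len() - 1;
    let mut store = 0;
    for i in 0..pivot {
        if v[i] <= v[pivot] {
            v.swap(i, store);
            store += 1;
        }
    }
    v.swap(store, pivot);
    store
}

fn par_quicksort(v: &mut [i32]) {
    if v.len() <= 1 {
        return;
    }
    let mid = partition(v);
    // split_at_mut yields two disjoint mutable sub-slices, so handing
    // them to different threads is safe by construction.
    let (lo, hi) = v.split_at_mut(mid);
    std::thread::scope(|s| {
        s.spawn(move || par_quicksort(lo));
        par_quicksort(&mut hi[1..]); // skip the pivot, already in place
    });
}

fn main() {
    let mut data = vec![5, 3, 8, 1, 9, 2, 7];
    par_quicksort(&mut data);
    println!("{:?}", data); // [1, 2, 3, 5, 7, 8, 9]
}
```

With Rayon, the `thread::scope` block would become `rayon::join(|| par_quicksort(lo), || par_quicksort(&mut hi[1..]))`, letting the pool decide whether each half actually runs in parallel.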
2 Responses
The velocity of each individual iteration will be a different figure, and there are many ways it gets impacted. Apart from planned absences (leave, training, etc.) and holidays, there can be unplanned absences caused by illness, personal emergencies, and the like. User stories that do not get completed in an iteration are moved to the next one; this brings down the velocity of the iteration where the story was started and bumps up the velocity of the iteration where it got completed. Given this, a good practice is to take the average of the last five or six iterations as the velocity of the team.

Team stability is another factor that impacts velocity: teams with higher churn will see higher volatility. Other factors, such as a change in technology, adoption of new tools, or an increase in automation, will also impact velocity, either positively or negatively. However, if the team is stable and has reached the "performing" stage, a steady rise in average velocity will be seen over time, until any of the factors mentioned above comes into play.
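The averaging rule above is simple arithmetic; a small sketch (the velocity figures are made up for illustration) shows how a carried-over story depresses one iteration and inflates the next, while the rolling average stays informative:

```rust
// Average of the last `window` iterations' velocities (story points).
// Illustrative only; the numbers below are invented.
fn average_velocity(history: &[f64], window: usize) -> f64 {
    let recent = &history[history.len().saturating_sub(window)..];
    recent.iter().sum::<f64>() / recent.len() as f64
}

fn main() {
    // A story carried over from iteration 4 to iteration 5 makes
    // those two figures misleading on their own: 15.0 and 30.0.
    let velocities = [21.0, 24.0, 22.0, 15.0, 30.0, 23.0];
    println!("{:.1}", average_velocity(&velocities, 6)); // 22.5
}
```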
Thanks Milind, fully agree with your comment.
Finally, irrespective of the increasing trend in velocity, there is improvement for sure, and it cannot be missed if observed. One of the intents of my blog is to encourage this observation by taking a mildly provocative stand.