This video presents a new approach to sound composition for soundtrack composers and sound designers. We propose a tool for intuitive sound manipulation and composition that targets sound variety and expressive rendering of the composition. We first automatically segment audio recordings into atomic grains, which are displayed on our navigation tool according to their timbre. To perform the synthesis, the user selects one recording as a model for the rhythmic pattern and timbre evolution, together with a set of audio grains. Our synthesis system then processes the chosen sound material to create new sound events, based on onset detection in the model recording and similarity measurements between the model and the selected grains. This approach can generate a large variety of sound events, such as those encountered in virtual environments or other training simulations. Companion video to the Audio Mostly 2010 conference paper "Towards User-friendly Audio Creation".
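
The pipeline described above — detecting onsets in the model recording, then placing at each onset the user-selected grain most similar in timbre — can be sketched roughly as follows. This is a minimal illustration under assumptions of our own (energy-based onset detection, spectral centroid as the timbre descriptor); it does not reproduce the paper's actual segmentation or similarity measures.

```python
import numpy as np

def detect_onsets(signal, frame=512, threshold=2.0):
    # Energy-based onset detection: flag frames whose energy jumps
    # well above the previous frame's energy.
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).sum(axis=1)
    return [i * frame for i in range(1, len(energy))
            if energy[i] > threshold * energy[i - 1] + 1e-12]

def spectral_centroid(grain):
    # Crude timbre descriptor: centre of mass of the magnitude spectrum.
    mag = np.abs(np.fft.rfft(grain))
    freqs = np.arange(len(mag))
    return (freqs * mag).sum() / (mag.sum() + 1e-12)

def resynthesize(model, grains, frame=512):
    # At each detected onset of the model, place the selected grain
    # whose spectral centroid best matches the model's local timbre.
    out = np.zeros_like(model)
    for onset in detect_onsets(model, frame):
        target = spectral_centroid(model[onset:onset + frame])
        best = min(grains, key=lambda g: abs(spectral_centroid(g) - target))
        end = min(onset + len(best), len(out))
        out[onset:end] += best[: end - onset]
    return out
```

The output preserves the model's rhythmic skeleton while substituting the user's grain material, which is one way to obtain the sound variety the tool aims for.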