Recent technological trends such as hearables require robust capture of human speech, not only for human-to-machine but also for augmenting human-to-human communication. Consequently, reliable speech enhancement, even in the presence of strong reverberation and interference, is more important than ever. To cope with these conditions, the problems of source separation and dereverberation are considered jointly. The performance of the system is optimized by incorporating data collected by supplementary sensors, such as bone conduction microphones, as well as relevant parameters that are estimated in advance. A novel direction-of-arrival (DOA) estimator that uses deep learning to localize sources accurately, even under strong reverberation and at large distances, makes it possible to fully exploit the availability of multiple microphones. Using advanced adaptive beamforming techniques, the signal from the target direction can be extracted from the mixture with unwanted components. To ensure the best possible synergy with the beamformer, the DOA estimation module will be tailored to the specific task of speech enhancement. Further promising parameters include a characterization of the speech and noise properties as well as the acoustic properties of the environment, e.g., in terms of the reverberation time.
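The adaptive beamforming step steered by an estimated DOA could, for instance, take the form of an MVDR (minimum variance distortionless response) beamformer. The following is a minimal sketch of that idea, not the system described above; the uniform linear array geometry, the narrowband frequency, and the white-noise covariance are illustrative assumptions:

```python
import numpy as np

def steering_vector(theta, n_mics=4, spacing=0.05, freq=1000.0, c=343.0):
    """Far-field steering vector of a uniform linear array.

    theta: DOA in radians (0 = broadside); spacing in meters;
    freq in Hz; c = speed of sound in m/s. All values are
    illustrative defaults, not taken from the text.
    """
    delays = np.arange(n_mics) * spacing * np.sin(theta) / c
    return np.exp(-2j * np.pi * freq * delays)

def mvdr_weights(R, d):
    """MVDR weights: minimize output power subject to unit gain
    toward the steering vector d, i.e. w = R^-1 d / (d^H R^-1 d)."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy check: with spatially white noise (R = I), MVDR reduces to
# a delay-and-sum beamformer and satisfies the distortionless
# constraint w^H d = 1 toward the target DOA.
n = 4
d = steering_vector(np.deg2rad(30), n_mics=n)
R = np.eye(n, dtype=complex)   # white-noise covariance (assumption)
w = mvdr_weights(R, d)
```

In practice, R would be estimated from noise-dominated time-frequency frames, and the weights would be computed per frequency bin; the sketch only shows the core linear-algebra step that links the DOA estimate to the spatial filter.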