You could take the data provided by each of these sources and combine it in a program that either stands alone and receives/displays the webcam feed itself, or runs alongside the webcam display and provides the additional elements.
If you wanted to display the data in textual form, you would need a window with some static controls whose values get updated for each variable at a fixed interval (the refresh time). If you wanted graphics, you could use GDI+, or even add a DirectX or OpenGL viewport, to render the real-time data in visual form.
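The refresh idea can be sketched in a few lines of Python (the sensor names here are just placeholders, not anything from your setup; a real GUI would update its label controls instead of printing):

```python
import time

# Hypothetical latest readings; in a real app these would be filled in
# by your sensor-input code (names are illustrative only).
sensor_values = {"temperature_c": 21.5, "heart_rate_bpm": 72}

def format_readout(values):
    """Build the text that the window's static controls would show."""
    return " | ".join(f"{name}: {value}" for name, value in values.items())

def run_display(refresh_seconds=0.5, ticks=3):
    """Poll the shared values at a fixed refresh interval and 'display' them."""
    for _ in range(ticks):
        print(format_readout(sensor_values))
        time.sleep(refresh_seconds)

run_display(refresh_seconds=0.01)
```

The same poll-and-redraw loop maps directly onto a timer message in a Win32/GDI+ window or a per-frame update in a DirectX/OpenGL viewport.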
|and display it in real time over top of the webcam feed|
I'm not sure whether by "over top" you mean "above" (as in side by side, etc.) or "overlaying". If you mean the latter, and you want a new/custom/combined feed generated that includes the data in it (like the frame-rate counter seen in real-time graphics applications), you would definitely need a DirectX or OpenGL viewport: take the data and the camera feed as input, combine them, and export a new custom output whose video frames incorporate the data in textual form.
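Conceptually, the overlay step is just "take a frame, stamp the data onto it, emit a new frame". Here is a minimal sketch where a text row stands in for pixel rows (a real implementation would draw into a GDI+ bitmap or a DirectX/OpenGL texture instead; the function name is made up for illustration):

```python
def overlay_text(frame_rows, text, row, col):
    """Return a new 'frame' with text stamped over the given position.

    frame_rows: a list of equal-length strings standing in for the
    rows of a camera frame. The original frame is left untouched.
    """
    out = list(frame_rows)
    line = out[row]
    out[row] = line[:col] + text + line[col + len(text):]
    return out

# A blank 4x16 "frame" from the camera:
frame = ["." * 16 for _ in range(4)]
hud = overlay_text(frame, "FPS:30", 0, 1)
```

Run this for every incoming frame, with `text` rebuilt from the latest sensor values, and you get the combined output stream described above.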
In either case, the data from each of your other sensors would have to be fed into your application using whatever means is particular to each, I guess. So you would have to fashion a way for each data source to deposit its readings into a common pool of variables, which your application would then use to construct the output you want.
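That "common pool of variables" could be as simple as a lock-protected dictionary: each source thread publishes its latest reading, and the display code takes a consistent snapshot whenever it redraws. A minimal sketch (the sensor names are hypothetical):

```python
import threading

class SensorPool:
    """A common pool of variables shared between sensor threads and the
    display loop. Writes and snapshots are serialized with a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._values = {}

    def publish(self, name, value):
        """Called by each data source when it has a new reading."""
        with self._lock:
            self._values[name] = value

    def snapshot(self):
        """Called by the display/overlay code; returns a consistent copy."""
        with self._lock:
            return dict(self._values)

pool = SensorPool()
pool.publish("gps_lat", 40.7128)
pool.publish("accel_x", 0.02)
snap = pool.snapshot()
```

Returning a copy from `snapshot()` means the renderer never sees a half-updated set of values, however the individual sources deliver their data.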
Hope these thoughts help somewhat,