How to project 3D data into pixel coordinates?

Even simpler: all of the “depth_registered” point clouds in your examples
are organized: point (u, v) in the cloud corresponds to pixel (u, v) in the
image, so you can look up points by pixel index directly (see the sketch
below).
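A minimal sketch of such a lookup, assuming ROS 1 with `rospy`; the topic
name `/camera/depth_registered/points` and the example pixel are
placeholders for whatever your camera driver publishes:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def callback(cloud):
    # For an organized cloud, cloud.height x cloud.width matches the
    # image resolution, so pixel (u, v) indexes the point directly.
    u, v = 320, 240  # example pixel: (column, row)
    # read_points accepts a list of (u, v) pixel indices via `uvs`;
    # the point may be (nan, nan, nan) where depth is missing.
    point = next(pc2.read_points(cloud, field_names=("x", "y", "z"),
                                 skip_nans=False, uvs=[(u, v)]))
    rospy.loginfo("Point at pixel (%d, %d): %s", u, v, point)

rospy.init_node("organized_cloud_lookup")
rospy.Subscriber("/camera/depth_registered/points", PointCloud2, callback)
rospy.spin()
```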

If you want the points in a different RGB frame and you have the depth
image, it’s best to register the depth image into that RGB frame and then
compute the point cloud from it (the result will also be organized). Both
steps can be done with depth_image_proc (see the launch sketch below).
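A sketch of a nodelet launch file wiring up the two depth_image_proc
steps; the camera topic names in the remaps are illustrative and depend
on your driver:

```xml
<launch>
  <node pkg="nodelet" type="nodelet" name="nodelet_manager" args="manager"/>

  <!-- Register the depth image into the RGB camera frame -->
  <node pkg="nodelet" type="nodelet" name="register"
        args="load depth_image_proc/register nodelet_manager">
    <remap from="rgb/camera_info"   to="/camera/rgb/camera_info"/>
    <remap from="depth/camera_info" to="/camera/depth/camera_info"/>
    <remap from="depth/image_rect"  to="/camera/depth/image_rect"/>
  </node>

  <!-- Build an organized XYZRGB cloud from the registered depth image -->
  <node pkg="nodelet" type="nodelet" name="points_xyzrgb"
        args="load depth_image_proc/point_cloud_xyzrgb nodelet_manager">
    <remap from="rgb/camera_info"      to="/camera/rgb/camera_info"/>
    <remap from="rgb/image_rect_color" to="/camera/rgb/image_rect_color"/>
  </node>
</launch>
```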

If you have an arbitrary (unorganized) point cloud, you’ll have to project
each point yourself with the pinhole camera model, as suggested above and
sketched below.
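A minimal sketch using the `image_geometry` package’s
`PinholeCameraModel`; it assumes the point is already expressed in the
camera’s optical frame (transform it with tf first if it isn’t):

```python
#!/usr/bin/env python
from image_geometry import PinholeCameraModel

def project_point(camera_info, point_xyz):
    """Project a 3D point (x, y, z) in the optical frame to pixel (u, v)."""
    model = PinholeCameraModel()
    model.fromCameraInfo(camera_info)  # a sensor_msgs/CameraInfo message
    # Internally this applies the pinhole model:
    #   u = fx * x / z + cx,   v = fy * y / z + cy
    return model.project3dToPixel(point_xyz)
```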

P.S.: The proper place to ask this kind of question is ROS Answers
(https://answers.ros.org).