Imagine for a moment that a "sensor" is your eye. You look out in a particular direction (the forward or X-axis of the prim is the direction in which your eye is "looking"). The "arc" value is actually 1/2 of the angle describing the "field of view" cone for your eye. If you have tunnel vision, the arc may be 15 degrees (for a 30-degree total angle left-to-right, top-to-bottom, etc.). If you have an extremely wide field of vision, seeing everything in front of the plane perpendicular to your viewing direction, the arc would be PI/2. If your eye were "omniscient" (literally "sensing in every direction"), the arc would be PI.
Now, when you look out in any particular direction, you may "see" a LOT of things. Many things may not interest you (and you can only actually "see" the closest 16 things of interest anyway), so the rest of the parameters help you narrow your "interest" criteria so you only get things you are interested in. Name lets you see only things with that exact name. Key is highly specific, searching only for the one thing with that unique ID. Type limits the kinds of things you are interested in, like scripted objects, agents (avatars), physical objects, etc. Finally, range specifies the maximum distance at which things interest you; anything farther away is ignored.
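Putting those filters together, a minimal one-shot scan might look like the sketch below (the 20 m range, the 45-degree arc, and triggering on touch are arbitrary illustration choices):

```lsl
// A one-shot scan, fired on touch: "" for name and NULL_KEY for key
// mean "don't filter on those", AGENT restricts the scan to avatars,
// 20.0 is the range in meters, and the arc is the half-angle in radians.
default
{
    touch_start(integer total_number)
    {
        llSensor("", NULL_KEY, AGENT, 20.0, 45.0 * DEG_TO_RAD);
    }
}
```

The results (if any) arrive asynchronously in a sensor event, not as a return value of llSensor itself.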
What the sensor returns is, in effect, a list of up to the 16 closest things that fit the criteria you gave it. If you keep the criteria narrow enough, then you will likely "see" what you are looking for, if it is there. The information that is returned includes:
llDetectedPos() - the position of the target, in region coordinates.
llDetectedRot() - a rotation describing the orientation of the target itself, unrelated to your eye position or rotation.
llDetectedType() - specifies what the target is.
llDetectedName() - the name of the target.
llDetectedKey() - the unique UUID key of the target.
llDetectedGroup() - the unique UUID key of the group the target is presently set to.
llDetectedOwner() - the unique UUID key of the target's owner agent/avatar.
llDetectedVel() - the current velocity vector of the target, including direction and speed.
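All of the llDetected* functions above are read inside the sensor event, indexed from 0 up to one less than the detected count. A minimal sketch, using a repeating scan (the 96 m range and 10-second interval are illustration values):

```lsl
default
{
    state_entry()
    {
        // Scan every 10 seconds: any name, any key, avatars only,
        // 96 m range, arc = PI (a full sphere around the prim).
        llSensorRepeat("", NULL_KEY, AGENT, 96.0, PI, 10.0);
    }

    sensor(integer num_detected)
    {
        integer i;
        for (i = 0; i < num_detected; ++i)
            llOwnerSay(llDetectedName(i) + " is at " + (string)llDetectedPos(i));
    }

    no_sensor()
    {
        llOwnerSay("Nothing detected this pass.");
    }
}
```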
Now, if you want to figure out the target's position or rotation relative to your "eye", you have to compute that yourself, using your eye's own position and rotation together with the values returned from the sensor. And if you want to "look" in a specific direction, you have to rotate your eye prim yourself.
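One way to sketch that relative-position computation: subtract your own position from the detected position, then divide the resulting offset by your own rotation, which expresses it in the prim's local frame (in LSL, dividing a vector by a rotation applies the inverse rotation):

```lsl
default
{
    sensor(integer num_detected)
    {
        // Offset from the "eye" to the first target, in region axes...
        vector offset = llDetectedPos(0) - llGetPos();
        // ...then un-rotated into the prim's own frame: a positive X
        // component means the target is in front of the eye.
        vector local = offset / llGetRot();
        llOwnerSay("Target in local coordinates: " + (string)local);
    }
}
```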
Also, make sure you don't get your units mixed up for angles. "45.0 radians" is nonsensical, since there are only about 6.28 radians in a full 360-degree circle. The llSensor function DOES take its arc value in radians, so if you want a 90-degree FoV cone, you would pass 45.0 * DEG_TO_RAD.
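As a quick reference, the arc values from the eye analogy earlier come out like this in radians (the range, type filter, and touch trigger here are arbitrary illustration choices):

```lsl
default
{
    touch_start(integer total_number)
    {
        float tunnel     = 15.0 * DEG_TO_RAD; // 30-degree total cone
        float hemisphere = PI_BY_TWO;         // everything in front of you
        float sphere     = PI;                // "omniscient": every direction
        llSensor("", NULL_KEY, AGENT, 10.0, tunnel);
    }
}
```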