Controlling over-constrained cable-driven parallel robots (CDPRs) is challenging due to the complex dynamics of the system. Classical controllers require force-distribution algorithms that solve an optimization problem online, which is computationally expensive. In this paper, we propose an AI-based approach that learns a controller from simulated trajectories. A dynamic model of the CDPR is first validated experimentally on a real robot. The controller is then trained in the CDPR simulator on randomly generated trajectories using the deep deterministic policy gradient (DDPG) algorithm. Finally, the trained controller is evaluated on trajectories not seen during training. Validation results show that the proposed approach tracks these unseen trajectories with good accuracy.
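The DDPG training loop mentioned above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the dimensions, the linear actor/critic, and the `toy_step` stand-in for the CDPR simulator are all illustrative assumptions. It shows the core DDPG mechanics only: a replay buffer, exploration noise added to the deterministic policy, temporal-difference critic updates against target networks, the deterministic policy gradient for the actor, and soft target updates.

```python
import random
from collections import deque

import numpy as np

rng = np.random.default_rng(0)

S_DIM, A_DIM = 3, 2        # e.g. pose error / cable-tension correction (illustrative)
GAMMA, TAU, LR = 0.99, 0.01, 1e-3

# Linear actor mu(s) = Theta @ s and linear critic Q(s, a) = w_s @ s + w_a @ a.
theta = rng.normal(scale=0.1, size=(A_DIM, S_DIM))
w_s = rng.normal(scale=0.1, size=S_DIM)
w_a = rng.normal(scale=0.1, size=A_DIM)
theta_t, w_s_t, w_a_t = theta.copy(), w_s.copy(), w_a.copy()  # target networks

buffer = deque(maxlen=10_000)  # replay buffer of (s, a, r, s') transitions


def toy_step(s, a):
    """Hypothetical stand-in for the CDPR simulator: reward penalizes pose error."""
    s_next = 0.9 * s + 0.1 * np.tanh(a).sum() * np.ones(S_DIM)
    return s_next, -float(s @ s)


s = rng.normal(size=S_DIM)
for step in range(300):
    # Deterministic policy plus Gaussian exploration noise.
    a = theta @ s + 0.1 * rng.normal(size=A_DIM)
    s_next, r = toy_step(s, a)
    buffer.append((s, a, r, s_next))
    s = s_next

    batch = random.Random(step).sample(list(buffer), min(16, len(buffer)))
    for bs, ba, br, bs2 in batch:
        # Critic: move Q(s, a) toward the TD target built from the target nets.
        y = br + GAMMA * (w_s_t @ bs2 + w_a_t @ (theta_t @ bs2))
        td = (w_s @ bs + w_a @ ba) - y
        w_s -= LR * td * bs
        w_a -= LR * td * ba
        # Actor: deterministic policy gradient; grad_a Q = w_a for a linear critic.
        theta += LR * np.outer(w_a, bs)

    # Soft-update the target networks toward the online networks.
    theta_t = TAU * theta + (1 - TAU) * theta_t
    w_s_t = TAU * w_s + (1 - TAU) * w_s_t
    w_a_t = TAU * w_a + (1 - TAU) * w_a_t
```

In practice the actor and critic are deep networks trained by backpropagation, but the loop structure (replay buffer, noisy rollout, TD critic update, policy-gradient actor update, soft target update) is the same.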