Electric vehicle (EV) charging stations represent a substantial load with significant flexibility. Balancing this load with model-free demand response (DR) based on reinforcement learning (RL) is an attractive approach. We build on previous RL research that uses a Markov decision process (MDP) to coordinate multiple charging stations simultaneously. That previously proposed approach, however, is computationally expensive, with long training times that limit its feasibility and practicality. We propose to a priori force the control policy to always fulfill any charging demand that offers no flexibility at a given point in time, and to use a correspondingly updated cost function. For the case of load flattening, we compare the policy of the newly proposed approach with the original (costly) one in terms of (i) the processing time required to learn the RL-based charging policy, and (ii) the overall performance of the policy decisions in meeting the target load on unseen test data.
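As a minimal sketch of this idea (not the paper's implementation, and using hypothetical names, a discrete-time setting, and a quadratic load-flattening cost), the snippet below first commits the power of sessions that have no flexibility left, i.e., those that must charge at full power in every remaining slot to finish before departure, and lets the RL policy control only the flexible sessions; the cost then penalizes the deviation of the resulting aggregate load from a flat target.

```python
# Illustrative sketch only: a priori fulfillment of inflexible charging demand
# before the RL policy acts, plus a quadratic load-flattening cost.
# All names, parameters, and the session model are hypothetical assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Session:
    remaining_energy: float   # energy still to be delivered (kWh)
    slots_to_departure: int   # time slots left until the EV departs
    max_power: float          # charger power limit (kW)

def split_flexible(sessions: List[Session], dt: float) -> Tuple[float, List[Session]]:
    """Commit sessions with no slack: they need full power in every remaining
    slot to finish in time. Return their forced load and the flexible rest."""
    forced_load = 0.0
    flexible = []
    for s in sessions:
        slots_needed = s.remaining_energy / (s.max_power * dt)
        if slots_needed >= s.slots_to_departure:   # no flexibility left
            forced_load += s.max_power
        else:
            flexible.append(s)
    return forced_load, flexible

def load_flattening_cost(total_load: float, target: float) -> float:
    """Quadratic penalty on deviation from the (flat) target load."""
    return (total_load - target) ** 2

def step_cost(sessions: List[Session], policy_power: List[float],
              target: float, dt: float = 0.25) -> float:
    """Cost for one decision step: the RL policy only sets the power of the
    flexible sessions; inflexible demand is fulfilled a priori."""
    forced_load, flexible = split_flexible(sessions, dt)
    controlled_load = sum(min(p, s.max_power) for p, s in zip(policy_power, flexible))
    return load_flattening_cost(forced_load + controlled_load, target)

if __name__ == "__main__":
    sessions = [Session(11.0, 2, 22.0),   # must charge at full power now
                Session(5.0, 8, 22.0)]    # still flexible
    # Policy idles the flexible session; only the forced 22 kW remains.
    print(step_cost(sessions, policy_power=[0.0], target=20.0))  # (22-20)^2 = 4.0
```

Restricting the policy's action space to the flexible sessions in this way shrinks the decision problem at every step, which is where the reduction in training time comes from.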