In this paper, the problem of minimizing the weighted sum of the age of information (AoI) and the total energy consumption of Internet of Things (IoT) devices is studied. In particular, each IoT device monitors a physical process that follows nonlinear dynamics. Since the dynamics of the physical process vary over time, each device must sample the real-time status of the physical system and send the status information to a base station (BS) so that the BS can monitor the process. The dynamics of the realistic physical process influence the sampling frequency and status update scheme of each device: when the process varies rapidly, the sampling frequency must be increased to capture the dynamics, and changes in the sampling frequency in turn affect the energy usage of the device. Thus, it is necessary to determine, at each time slot, a subset of devices to sample the physical process so as to accurately monitor its dynamics using minimum energy. This problem is formulated as an optimization problem whose goal is to minimize the weighted sum of AoI and total device energy consumption. To solve this problem, a machine learning framework based on the repeated update Q-learning (RUQL) algorithm is proposed. The proposed method enables the BS to overcome the biased action selection problem (i.e., an agent repeatedly takes a subset of actions while ignoring others) and hence to quickly find a device sampling and status update policy that minimizes the weighted sum of AoI and energy consumption of all devices. Simulations using real PM 2.5 pollution data for Beijing from the Center for Statistical Science at Peking University show that the proposed algorithm can reduce the sum of AoI by up to 26.9% compared to the conventional Q-learning method.
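The repeated-update idea behind RUQL admits a compact sketch. In RUQL, the standard Q-update for the chosen action is applied 1/π(s,a) times, which in closed form is a single update with learning rate 1 - (1 - α)^{1/π(s,a)}, so rarely selected actions still receive full-sized corrections. The toy environment below (a single device whose state is its AoI, with an idle/sample action pair and the particular weights and step counts used) is an illustrative assumption for exposition, not the paper's multi-device model; only the closed-form repeated update follows the RUQL algorithm named in the abstract.

```python
import numpy as np

def ruql_update(Q, s, a, r, s_next, alpha, gamma, pi_sa):
    # RUQL repeats the standard Q-update 1/pi(s,a) times; in closed
    # form this is one update with an inflated learning rate, which
    # counteracts the biased-action-selection problem.
    eff_alpha = 1.0 - (1.0 - alpha) ** (1.0 / max(pi_sa, 1e-8))
    Q[s, a] += eff_alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def train(steps=30000, alpha=0.1, gamma=0.9, eps=0.2,
          max_aoi=5, energy_cost=3.0, seed=0):
    # Hypothetical single-device model: state = current AoI (1..max_aoi),
    # action 0 = stay idle (AoI grows, capped), action 1 = sample and
    # send a status update (AoI resets to 1 at an energy cost).
    # Reward is the negative weighted sum of AoI and energy, mirroring
    # the objective described in the abstract.
    rng = np.random.default_rng(seed)
    Q = np.zeros((max_aoi, 2))
    s = 0  # AoI = 1
    for _ in range(steps):
        greedy = int(Q[s].argmax())
        a = greedy if rng.random() > eps else int(rng.integers(2))
        # epsilon-greedy selection probability of the chosen action
        pi_sa = (1 - eps) + eps / 2 if a == greedy else eps / 2
        if a == 1:  # sample: reset AoI, pay the energy cost
            s_next = 0
            r = -(1.0 + energy_cost)
        else:       # idle: AoI increases
            s_next = min(s + 1, max_aoi - 1)
            r = -float(s_next + 1)
        ruql_update(Q, s, a, r, s_next, alpha, gamma, pi_sa)
        s = s_next
    return Q
```

Under these assumed parameters, the greedy policy read off `np.argmax(Q, axis=1)` is a threshold rule: the device idles while its AoI is small and samples once the AoI (and hence the AoI penalty) grows large, which is the kind of sampling/update trade-off the formulated optimization captures.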