Automated tasks should be scheduled not based on how frequently the underlying data changes, but on the organization’s tolerance for out-of-date data once a change occurs.
Twice this week I’ve been in conversations about how frequently a script should run. In both cases, the first question asked was “how frequently does the data change?” And in both cases, large changes happen about three times a year, with small one-off changes occurring infrequently in between. But the better question is “once a change happens, how much delay can we tolerate before it propagates throughout the system?” How frequently the data changes is a red herring. We want to schedule based on how quickly people need to see the changed data.
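As a minimal sketch of this rule of thumb (the function name and parameters are mine, not from any particular library): the worst-case staleness of a periodic job is roughly one scheduling interval plus the job’s own runtime, so the interval should be derived from the tolerated staleness, not from how often the data changes.

```python
from datetime import timedelta

def run_interval(staleness_tolerance: timedelta,
                 job_duration: timedelta = timedelta(0)) -> timedelta:
    """Pick a scheduling interval from the tolerated staleness.

    A change can land just after a run starts, so in the worst case
    it waits one full interval plus the job's runtime before it is
    visible. Scheduling at (tolerance - duration) keeps that worst
    case within the tolerance.
    """
    interval = staleness_tolerance - job_duration
    if interval <= timedelta(0):
        raise ValueError("job takes longer than the allowed staleness")
    return interval

# Tolerate up to 24 hours of stale data; the job takes ~15 minutes.
print(run_interval(timedelta(hours=24), timedelta(minutes=15)))
# → 23:45:00
```

Note that the answer is the same whether the data changes three times a year or three times a day; only the tolerance matters.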