Please note: This is an advanced topic that requires technical knowledge beyond standard operations. To prevent historical data from accumulating in the ftrack database, there is a setting you can enable; please have a look at the Configuration topic.
Can you run the following queries for us and let us know the results:
SELECT count(*) FROM social_feed WHERE socialid IS NULL;
SELECT `action`, `insert`, count(*) AS total FROM social_event GROUP BY `action`, `insert` ORDER BY total DESC;
SELECT count(*) FROM social_event WHERE showid IS NULL;
SELECT count(*) FROM social_event JOIN `show` ON `show`.showid = social_event.showid WHERE `show`.`status` = 'hidden';
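If it is more convenient, the queries can also be run from a small Python snippet. This is only a sketch: the run_diagnostics helper and the connection URL are illustrative (not part of ftrack), and executing it requires the sqlalchemy and PyMySQL packages mentioned below.

```python
# The four diagnostic queries from above, collected in one place.
DIAGNOSTIC_QUERIES = [
    "SELECT count(*) FROM social_feed WHERE socialid IS NULL",
    "SELECT `action`, `insert`, count(*) AS total FROM social_event "
    "GROUP BY `action`, `insert` ORDER BY total DESC",
    "SELECT count(*) FROM social_event WHERE showid IS NULL",
    "SELECT count(*) FROM social_event JOIN `show` "
    "ON `show`.showid = social_event.showid "
    "WHERE `show`.`status` = 'hidden'",
]


def run_diagnostics(url):
    '''Run each diagnostic query against *url* and print the results.'''
    import sqlalchemy  # pip install sqlalchemy PyMySQL

    engine = sqlalchemy.create_engine(url)
    with engine.connect() as connection:
        for query in DIAGNOSTIC_QUERIES:
            print(query)
            for row in connection.execute(sqlalchemy.text(query)):
                print(row)


# Example (placeholder credentials):
# run_diagnostics('mysql+pymysql://username:password@host-name:3306/ftrack')
```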
Below is a script that can be used to clean up events from projects that are hidden.
The tool deletes 1000 events per batch, since larger batches put more load on the database and can actually run slower.
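The batch-then-sleep pattern the script relies on can be illustrated with a self-contained toy example. Note this uses an in-memory SQLite database and a made-up event table purely for demonstration; the real script targets MySQL and the ftrack schema.

```python
import sqlite3

# Hypothetical table for the demo: 5000 events, half flagged as hidden.
connection = sqlite3.connect(':memory:')
connection.execute(
    'CREATE TABLE event (id INTEGER PRIMARY KEY, hidden INTEGER)'
)
connection.executemany(
    'INSERT INTO event (hidden) VALUES (?)',
    [(i % 2,) for i in range(5000)]
)

BATCH = 1000
total = 0
while True:
    # Delete the oldest matching rows, at most BATCH per statement.
    cursor = connection.execute(
        'DELETE FROM event WHERE id IN '
        '(SELECT id FROM event WHERE hidden = 1 ORDER BY id LIMIT ?)',
        (BATCH,)
    )
    connection.commit()
    if cursor.rowcount == 0:
        break
    total += cursor.rowcount
    # In production a delay goes here so the database stays responsive.

print(total)  # 2500 rows deleted across three batches (1000 + 1000 + 500)
```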
You can run the tool with python:
pip install sqlalchemy PyMySQL
python cleanup.py mysql+pymysql://username:password@host-name:3306/ftrack
It will run until all events belonging to hidden projects have been removed.
Note: Please test this on your staging server first and make sure you have a fresh database backup taken before you start.
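As an additional safety check, you can first count how many rows match the cleanup criteria without deleting anything. This dry-run helper is a sketch only: count_deletable is not part of ftrack, but the criteria mirror those used by the delete query in the script.

```python
# Same criteria as the cleanup delete, expressed as a plain count.
COUNT_QUERY = (
    "SELECT count(*) FROM social_event "
    "LEFT JOIN `show` ON `show`.showid = social_event.showid "
    "WHERE `show`.`status` = 'hidden' OR social_event.showid IS NULL"
)


def count_deletable(url):
    '''Return the number of social_event rows the cleanup would remove.'''
    import sqlalchemy  # pip install sqlalchemy PyMySQL

    engine = sqlalchemy.create_engine(url)
    with engine.connect() as connection:
        return connection.execute(sqlalchemy.text(COUNT_QUERY)).scalar()
```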
Example script (cleanup.py):
import logging
import sys
import argparse
import time

try:
    import sqlalchemy
except ImportError:
    raise ImportError(
        'sqlalchemy not found, please install using '
        '"pip install sqlalchemy PyMySQL"'
    )


def main(arguments=None):
    '''ftrack social event cleanup for hidden projects.

    This tool will delete rows from the social event table in batches of
    the supplied limit. The rows are deleted in order (oldest first) and
    only for projects that are hidden.

    Use the tool by supplying the database URL as first argument on the form
    "mysql+pymysql://username:password@host-name:3306/ftrack".

    '''
    if arguments is None:
        arguments = []

    parser = argparse.ArgumentParser()

    # Allow setting of logging level from arguments.
    loggingLevels = {}
    for level in (
        logging.NOTSET, logging.DEBUG, logging.INFO, logging.WARNING,
        logging.ERROR, logging.CRITICAL
    ):
        loggingLevels[logging.getLevelName(level).lower()] = level

    parser.add_argument(
        'sqlalchemy_url',
        help=(
            'URL to the database on the form '
            '"mysql+pymysql://username:password@host-name:3306/ftrack".'
        )
    )
    parser.add_argument(
        '-l', '--limit',
        help='Number of rows to delete per batch.',
        type=int,
        default=1000
    )
    parser.add_argument(
        '-d', '--delay',
        help=(
            'Delay in seconds between delete operations. Note that the '
            'delay is important to ensure the database stays responsive '
            'during delete operations.'
        ),
        type=float,
        default=60
    )
    parser.add_argument(
        '-v', '--verbosity',
        help='Set the logging output verbosity.',
        choices=loggingLevels.keys(),
        default='info'
    )
    namespace = parser.parse_args(arguments)

    logging.basicConfig(
        format='%(asctime)s %(levelname)s:%(name)s:%(message)s',
        level=loggingLevels[namespace.verbosity],
        datefmt='%Y-%m-%d %H:%M:%S'
    )

    engine = sqlalchemy.create_engine(namespace.sqlalchemy_url)

    # Delete the oldest events first, at most `limit` rows per statement,
    # for projects that are hidden or no longer exist.
    query = sqlalchemy.text("""
        DELETE social_event FROM social_event JOIN (
            SELECT socialid FROM social_event
            LEFT JOIN `show` ON `show`.showid = social_event.showid
            WHERE (
                `show`.`status` = 'hidden' OR social_event.showid IS NULL
            )
            ORDER BY social_event.created_at
            LIMIT {0}
        ) AS temporary_event
        ON temporary_event.socialid = social_event.socialid;
    """.format(namespace.limit))

    total = 0
    last_row_count = -1
    while last_row_count != 0:
        # Run each batch in its own transaction so the deleted rows are
        # committed before sleeping.
        with engine.begin() as connection:
            result = connection.execute(query)
            last_row_count = result.rowcount
        total += last_row_count
        logging.info(
            'Deleted {0} rows. Now sleeping {1} seconds'.format(
                total, namespace.delay
            )
        )
        if last_row_count:
            time.sleep(namespace.delay)


if __name__ == '__main__':
    raise SystemExit(
        main(sys.argv[1:])
    )