    Optimize FixReblogsInFeeds migration (#5538) · 0129f5ea
    unarist wrote
    We have changed how we store reblogs in Redis for bigint IDs. This process 1) scans all entries in each user's feed, and 2) re-stores the reblogs with 3 write commands.
    
    However, this operation is really slow on large instances, e.g. about 1 hour on friends.nico (w/ 50k users). So I tried the tweaks below.
    
    * Non-reblogs were detected with `entry[0] == entry[1]`, but this condition never holds because `entry[0]` is a String while `entry[1]` is a Float. Changing it to `entry[0].to_i == entry[1]` seems to work.
      -> about 4-20x faster (feeds with fewer reblogs speed up more)
    * Write operations can be batched with a pipeline
      -> about 6x faster
    * Wrap the operation in a Lua script and execute it with the EVALSHA command. This greatly reduces the number of round trips between Ruby and Redis.
      -> about 3x faster
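The type mismatch behind the first tweak is easy to reproduce in plain Ruby (the entry value here is made up for illustration):

```ruby
# Entries come back from Redis as [member, score] pairs:
# the member is a String, the score is a Float.
entry = ["100000000000000", 100_000_000_000_000.0]

# String == Float is never true in Ruby, so this check always fails,
# even for a non-reblog entry:
entry[0] == entry[1]        # => false

# Converting the member to an Integer first makes the comparison work,
# because Integer == Float compares the values numerically:
entry[0].to_i == entry[1]   # => true
```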
    
    I went with the Lua script approach, though the other optimizations alone may be enough.
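As a rough illustration of the EVALSHA idea (a hypothetical sketch, not the actual migration script; the key names and the reblog-tracking hash are made up), the per-feed loop can run entirely server-side, so Ruby sends one command per feed instead of one per entry:

```ruby
# Hypothetical sketch using the redis-rb gem: scan one feed's sorted set
# inside Redis and record its reblog entries in a single server-side pass.
REBUILD_REBLOGS = <<~LUA
  local entries = redis.call('zrange', KEYS[1], 0, -1, 'withscores')
  local moved = 0
  for i = 1, #entries, 2 do
    local member, score = entries[i], entries[i + 1]
    if tonumber(member) ~= tonumber(score) then
      -- score differs from the member id, so this entry is a reblog;
      -- remember which status it points at (illustrative key layout)
      redis.call('hset', KEYS[2], member, score)
      moved = moved + 1
    end
  end
  return moved
LUA

if ENV['REDIS_URL'] # only run when a Redis server is actually available
  require 'redis'
  redis = Redis.new(url: ENV['REDIS_URL'])
  sha = redis.script(:load, REBUILD_REBLOGS)  # upload the script once
  # then execute it once per feed via its SHA1 digest
  redis.evalsha(sha, keys: ['feed:home:123', 'feed:home:123:reblogs'])
end
```

Because the script is loaded once with SCRIPT LOAD and invoked by digest, each feed costs a single EVALSHA round trip rather than a ZRANGE plus several writes.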