6.12.5

From BC-Wiki

Back to the overview

  • Mac: update Xcode project for new source files client/current_version.cpp,.h (checked in to boinc_core_release_6_12_4)
  • client: update STD of ineligible projects by decay only. Not sure why, but this eliminates gradual negative drift.
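"Decay only" can be pictured as a plain exponential decay of the short-term debt (STD) toward zero, with no cross-project normalization step that could drift negative. This is an illustrative sketch only; the function and parameter names are hypothetical, not the actual client code:

```cpp
#include <cmath>

// Hypothetical sketch: decay an ineligible project's short-term debt
// toward zero over an interval dt, using an assumed half-life.
// Pure decay shrinks the magnitude but never changes the sign,
// which avoids gradual negative drift.
double decay_std(double std_debt, double dt, double half_life) {
    double factor = std::exp(-dt * std::log(2.0) / half_life);
    return std_debt * factor;
}
```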
  • client: linux compile fix
  • client: small fix for GPU scheduling (use anticipated debt instead of STD)
  • scheduler: fix logic that deals with jobs that need > 2 GB RAM. My change of 1 Oct ([22440]) required that such jobs be processed with 64-bit apps, on the assumption that 32-bit apps have a 2 GB user address space limit. However, it turns out this limit applies only to Windows, where kernel and user mode share the 4 GB address space (each gets half). On Linux, the split is 3 GB user / 1 GB kernel. On Mac OS X, user mode and kernel mode have separate address spaces, each of them 4 GB.
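The per-platform limits above can be sketched as a simple check; the names and the string-based OS dispatch here are illustrative, not the scheduler's actual code:

```cpp
#include <string>

// Usable user address space for a 32-bit app, per the limits above:
//   Windows: 4 GB split between kernel and user mode, 2 GB each
//   Linux:   3 GB user / 1 GB kernel
//   Mac OS X: separate 4 GB address spaces for user and kernel mode
const double GIGA = 1073741824.0;

double user_space_limit_32bit(const std::string& os) {
    if (os == "windows") return 2*GIGA;
    if (os == "linux")   return 3*GIGA;
    return 4*GIGA;      // Mac OS X (and other separate-space platforms)
}

// A job's memory bound only forces a 64-bit app when it exceeds
// the platform's 32-bit user address space, not a blanket 2 GB.
bool needs_64bit_app(double rsc_memory_bound, const std::string& os) {
    return rsc_memory_bound > user_space_limit_32bit(os);
}
```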
  • manager: if attaching to existing account, don't check min passwd length
  • manager: fix non-translatable "0 bytes"
  • client and manager: fix notice titles
  • code cleanup: please use standard coding conventions
  • client: small initial checkin for new scheduling system. Keep track of per-project recent estimated credit
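One way to track "recent estimated credit" per project is an exponentially weighted average: decay the old value over the elapsed interval, then add the new credit. This sketch uses assumed names and a half-life parameter, not the real client structures:

```cpp
#include <cmath>

// Illustrative per-project state (not the actual BOINC struct).
struct ProjectRec {
    double rec = 0;         // recent estimated credit
    double rec_time = 0;    // time of last update
};

// Decay the running average over (now - rec_time), then add the
// credit estimated since the last update. half_life is an assumed
// tuning parameter controlling how fast old credit is forgotten.
void update_rec(ProjectRec& p, double now, double credit, double half_life) {
    double dt = now - p.rec_time;
    p.rec = p.rec * std::exp(-dt * std::log(2.0) / half_life) + credit;
    p.rec_time = now;
}
```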
  • client: show --no_gpus option in --help
  • client: don't preempt GPU jobs in middle of time slice
  • client: fix problems with job scheduling policy.
  • Old:
    • job scheduling has 2 phases. In the first phase (schedule_cpus()) we make a list of jobs, with deadline-miss and high-STD jobs first. We keep track of the RAM used, skip jobs that would exceed available RAM, and stop scanning when the number of CPUs used by jobs in the list exceeds the number of actual CPUs.
    • In the second phase (enforce_schedule()) we add currently running jobs (which may be in the middle of a time slice) to the list, reorder to give priority to such jobs (and possibly to multi-thread jobs), then run and/or preempt jobs, keeping track of RAM used.
  • Problems:
    • Suppose we add an EDF 1-CPU job to the list, then an MT job. We stop scanning at that point because the CPU count is exceeded. But enforce_schedule() won't run the MT job, and CPUs will be idle.
    • Because the list may be reordered, skipping jobs based on RAM is not correct, and may cause deadlines to be missed.
  • New:
    • when making the job list, keep track of #CPUs used by MT jobs and non-MT jobs separately. Stop the scan only if the non-MT count exceeds #CPUs. This ensures that we have enough jobs to use all the CPUs, even if the MT jobs can't be run for whatever reason.
    • don't skip jobs because of RAM usage
    • skip MT jobs if the MT CPU count is at least #CPUs
  • Notes:
    • ignoring RAM usage in phase 1 can cause idleness in some cases, e.g. suppose there are 4 GB of RAM and the list has jobs that use 3 GB, but there are also some jobs that use 1 GB. I'm not sure how to fix this.
    • Maybe the 2-phase approach is not a good idea. We did it this way for efficiency, so that we don't have to recompute the job list each time a job checkpoints. But this is probably not a concern, and I like the idea of a simpler approach, e.g. reducing the policy to a single comparison function.
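The new phase-1 rules above (separate MT/non-MT CPU counts, no RAM-based skipping) can be sketched as follows; the types and names are illustrative, not the client's actual data structures:

```cpp
#include <vector>

// Illustrative job record: ncpus > 1 marks a multi-thread (MT) job.
struct Job {
    double ncpus;
};

// Phase-1 scan over candidates (already ordered: deadline-miss and
// high-STD jobs first). Count MT and non-MT CPU usage separately:
//   - skip MT jobs once MT usage has reached the CPU count
//   - stop only when non-MT jobs alone can fill all CPUs, so the
//     list has enough work even if the MT jobs can't be run
//   - RAM usage is deliberately ignored here (handled in phase 2)
std::vector<Job> make_run_list(const std::vector<Job>& candidates, double ncpus) {
    std::vector<Job> run_list;
    double mt_cpus = 0, st_cpus = 0;
    for (const Job& j : candidates) {
        if (j.ncpus > 1) {
            if (mt_cpus >= ncpus) continue;   // enough MT work already
            mt_cpus += j.ncpus;
        } else {
            st_cpus += j.ncpus;
        }
        run_list.push_back(j);
        if (st_cpus >= ncpus) break;  // non-MT jobs can fill the CPUs
    }
    return run_list;
}
```

With 2 CPUs, a leading 4-CPU MT job no longer terminates the scan: single-CPU jobs keep being added until they alone cover both CPUs.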
  • GUI RPC: parse GPU info, FLOPS from APP_VERSION records (client already sends this info)
  • manager: show app speed and task FLOPs estimate in task Properties
  • client: gpu_active_frac was being computed incorrectly, resulting in various scheduling problems
  • client: comment out update_rec() call
  • client: comment out a debug msg
  • MGR: Fix the event log so that it doesn't store its size information while it is in a minimized state.
  • MGR: Fix the close dialog issue on wxGTK; apparently there is a hidden flag that governs the handling of the GTK callback function. Fixes #962 (thanks for the patch, cli)