Let me see if I can understand the question…
[[ hmm. In trying to explain what I do to fix a bug I ended up describing my normal workflow. I suspect your “looks somewhat awkward” is going to switch to “very awkward”. Perhaps this will be a good baseline for suggesting where you want to put the “lightweight” fixes. ]]
In my experience, few changes are really as simple as they first appear. If you haven’t run the full process, including testing on all platforms, you will probably break something.
Internally we typically have two main integration trees:
- bugfix: Contains the last release plus any bugfixes that have accumulated since that release. Releases from here are usually bk-X.Y.Z releases.
- dev: Contains larger, riskier changes that will take a while before they are ready. This will probably become the next bk-X.0 or bk-X.Y release.
I usually have clones of these on my local machine for fast access.
If a simple bugfix problem is reported in the current release my workflow is usually like this:
bk clone bugfix bugfix-fixname
... create fix, run regressions, update documentation ...
bk citool # create a new cset
- That local clone uses hardlinks and so is reasonably fast (0.6s on my machine).
- It is true that I am working in a new directory and so I need to rebuild everything. We have tweaked the build so the expensive part (tcltk) is cached in /build/obj, and I use ccache, so a full build takes about 20 seconds on my machine. In some environments, being able to work in an already-built tree is really important, and git’s branches can make this better.
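The speed of that local clone comes from hardlinking rather than copying. This is plain filesystem behavior, not anything bk-specific; a quick illustration (file names here are made up):

```shell
# A hardlinked "clone" shares inodes with the original, so no file data
# is copied; disk blocks only get duplicated when a file is changed.
mkdir -p orig clone
echo "revision history" > orig/s.file    # stand-in for a versioned file
ln orig/s.file clone/s.file              # hardlink, as a local clone does
# Both names refer to the same inode, so the clone cost no data copy:
[ orig/s.file -ef clone/s.file ] && echo same-inode
```

That is why cloning a whole tree takes well under a second even when the history is large.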
Now at this point, I could push this fix to the integration tree, but that isn’t really how we tend to work. We have a peer review process.
Now this cset is on my local machine and it needs to be put in a unique repository on the remote machine. You could do that with ‘bk clone’, but that is pretty expensive over a slow net connection, so you want to use ‘bk push’. But for push to work I need a matching baseline repository on the remote machine.
That looks something like this:
On remote machine:
bk clone -r$BASEREV /home/bk/bugfix /home/bk/wscott/bugfix-fixname
On local machine:
bk push bk://work/wscott/bugfix-fixname
(but I have a script that does this with a couple more tweaks)
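Roughly, the script amounts to parameterizing the two commands above. A stripped-down sketch (the host name, the use of ssh to run the remote clone, and the dry-run echoes are illustrative assumptions, not the real script):

```shell
#!/bin/sh
# Simplified sketch of the clone-then-push helper described above.
# Assumes ssh access to the review host; "echo" keeps it a dry run.
push_for_review() {
    name=$1                # e.g. bugfix-fixname
    baserev=${2:-+}        # baseline cset; '+' means the tip revision
    # 1. Create a matching baseline repository on the remote machine.
    echo ssh work "bk clone -r$baserev /home/bk/bugfix /home/bk/wscott/$name"
    # 2. Push the local cset(s) into it.
    echo bk push "bk://work/wscott/$name"
}
push_for_review bugfix-fixname
```

The real version has a couple more tweaks, but the shape is just: make the remote baseline, then push into it.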
Then I create an entry in our internal RTI review system, and people make comments and push fixes. We also run “crankturns” on a collection of these RTIs across a pile of machines running different processors and operating systems. Along the way I might make a series of fixups to my ‘fixname’ RTI, and other people might push fixups as well, so the history gets messy. Eventually, the RTI gets approved.
At that point I would usually do something like this (on my local machine):
bk pull bk://work/wscott/bugfix-fixname # fetch any updates from reviewers
bk pull # fetch new stuff from tip
bk collapse -e@ # collapse csets together
bk citool # recreate cset and fix comments
bk push # push to my local master
bk push # push to official tree
One change to this process we have in mind is to remove the overhead of having to maintain a remote copy of the cset on the integration tree: just let the RTI system store the repository directly.