[Scons-users] Performance of copying from cache
a.cavallo at cavallinux.eu
a.cavallo at cavallinux.eu
Wed Jun 12 07:54:17 EDT 2013
Are you using any specific source code control system (CVS-like)? If so, almost all
of them support "cloning" and should provide local checkouts for devs' changes, unless
there are hardcoded paths (and you cannot relocate the files).
On Wed 12/06/13 11:31, "Tom Tanner (BLOOMBERG/ LONDON)" ttanner2 at bloomberg.net wrote:
> we need to build and test for multiple architectures so it's most
> convenient to have the source on NFS. Otherwise a dev will make a change,
> then have to ensure they've updated things so they can see the change on
> the other architecture.
> This doesn't tend to work too well.
>
> ----- Original Message -----
> From: a.cavallo at cavallinux.eu
> To: scons-users at scons.org
> At: Jun 12 2013 12:25:12
>
> Mmm. I suppose rsyncing to a locally mounted dir would be a better solution:
> is there any special reason for the source being stored on an NFS mount? Are
> you using ClearCase? If I remember right it does support local snapshots as well.
>
> I hope this helps.
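
(A rough sketch of the rsync route suggested above, assuming a plain mirror from
the NFS tree into a local directory; the paths and helper name here are
illustrative, not from the thread:)

    import subprocess

    def mirror_source(nfs_src, local_dst):
        # Mirror the NFS-hosted source tree into a local directory.
        # -a preserves permissions and timestamps; --delete keeps the
        # local copy an exact mirror of the NFS tree.
        subprocess.check_call(['rsync', '-a', '--delete',
                               nfs_src.rstrip('/') + '/', local_dst])

    # e.g. mirror_source('/net/build/src', '/local/scratch/src')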
>
> On Wed 12/06/13 11:16, "Tom Tanner (BLOOMBERG/ LONDON)" ttanner2 at bloomberg.net wrote:
> So, we have a biggish build, which we tend to run with -j 2 or 4, with some
> large files in it, and we've noticed that copying files out of our NFS cache
> can take ridiculous amounts of time (and we're talking about 10+ minutes,
> though AIX seems to be a lot worse than Solaris or Linux).
> Having a hunt round, I discovered that shutil.copy2 copies 16k at a time,
> which doesn't seem terrifically efficient (it would appear I'm not the only
> person who thinks that). So I took a copy of that, used a 1M buffer, and it
> reduced my worst case copy to 11 seconds.
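
(A minimal sketch of the bigger-buffer copy described above; the thread doesn't
show the actual code, so the function name and structure here are illustrative.
It mirrors what shutil.copy2 does, copying the data and then the metadata, but
with a 1 MB chunk size instead of the 16 KB default:)

    import shutil

    ONE_MB = 1024 * 1024

    def copy2_big_buffer(src, dst, buf_size=ONE_MB):
        # Copy the file data in buf_size chunks rather than the 16 KB
        # chunks shutil.copy2 uses by default...
        with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
            shutil.copyfileobj(fsrc, fdst, buf_size)
        # ...then preserve mode and timestamps, as copy2 would.
        shutil.copystat(src, dst)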
> But the thing that really improved worst-case performance was replacing that
> with "cp -p". However, that hosed overall performance.
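
(For comparison, one way the "cp -p" variant might be wired up, assuming a
straight subprocess call per cached file; the thread doesn't show the actual
hook, so this is only a sketch:)

    import subprocess

    def copy_with_cp(src, dst):
        # Shell out to cp -p, which preserves mode, ownership and
        # timestamps; note this spawns one external process per file copied.
        subprocess.check_call(['cp', '-p', src, dst])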
> Has anyone any suggestions? Would it be saner to just read the whole file at
> once (although as some of the files are quite large, that might be painful)?