path: root/fs
Age | Commit message | Author
2012-01-11 | nilfs2: unbreak compat ioctl | Thomas Meyer
commit 695c60f21c69e525a89279a5f35bae4ff237afbc upstream. commit 828b1c50ae ("nilfs2: add compat ioctl") incidentally broke all other NILFS compat ioctls. Make them work again. Signed-off-by: Thomas Meyer <thomas@m3y3r.de> Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp> Tested-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Change-Id: I125655948842b824d2c0c0032578e41071d952b6 Reviewed-on: http://git-master/r/74183 Reviewed-by: Varun Wadekar <vwadekar@nvidia.com> Tested-by: Varun Wadekar <vwadekar@nvidia.com>
2012-01-11 | NFSv4.1: Ensure that we handle _all_ SEQUENCE status bits. | Trond Myklebust
commit 111d489f0fb431f4ae85d96851fbf8d3248c09d8 upstream. Currently, the code assumes that the SEQUENCE status bits are mutually exclusive. They are not... Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Change-Id: Idd21b5590495d004dd965433df6a089f46f682bd Reviewed-on: http://git-master/r/74180 Reviewed-by: Varun Wadekar <vwadekar@nvidia.com> Tested-by: Varun Wadekar <vwadekar@nvidia.com>
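The gist of the fix is general: a status field like this is a set of bit flags, so each bit has to be tested individually rather than treated as a set of mutually exclusive values. A minimal userspace sketch of the difference (the flag names and values below are invented for illustration, not the actual NFSv4.1 definitions):

    #include <stdio.h>

    /* Hypothetical status flags -- illustration only. */
    #define STATUS_CB_PATH_DOWN   0x1
    #define STATUS_LEASE_MOVED    0x2
    #define STATUS_RECALLABLE     0x4

    static void handle_flags(unsigned int status)
    {
        /* Correct: test every bit; several may be set at once. */
        if (status & STATUS_CB_PATH_DOWN)
            printf("callback path down\n");
        if (status & STATUS_LEASE_MOVED)
            printf("lease moved\n");
        if (status & STATUS_RECALLABLE)
            printf("recallable state revoked\n");
        /*
         * Buggy pattern: switch (status) { case STATUS_LEASE_MOVED: ... }
         * silently ignores combined values such as 0x3.
         */
    }

    int main(void)
    {
        handle_flags(STATUS_CB_PATH_DOWN | STATUS_LEASE_MOVED);
        return 0;
    }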
2012-01-11 | NFS: Fix a regression in nfs_file_llseek() | Trond Myklebust
commit 6c52961743f38747401b47127b82159ab6d8a7a4 upstream. After commit 06222e491e663dac939f04b125c9dc52126a75c4 (fs: handle SEEK_HOLE/SEEK_DATA properly in all fs's that define their own llseek) the behaviour of llseek() was changed so that it always revalidates the file size. The bug appears to be due to a logic error in the afore-mentioned commit: a test that always evaluates to 'true'.
Reported-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Change-Id: I4030564c1503d782139279e6741c819acbb0fb8f
Reviewed-on: http://git-master/r/74179
Reviewed-by: Varun Wadekar <vwadekar@nvidia.com>
Tested-by: Varun Wadekar <vwadekar@nvidia.com>
2012-01-05 | Linux 3.1.7 | Varun Wadekar
Change-Id: I99507d7cfdcee064f808856dc2ce99d806fd864f
2011-12-21 | fuse: fix llseek bug | Roel Kluin
commit b48c6af2086ab2ba8a9c9b6ce9ecb34592ce500c upstream. The test in fuse_file_llseek() "not SEEK_CUR or not SEEK_SET" always evaluates to true. This was introduced in 3.1 by commit 06222e49 (fs: handle SEEK_HOLE/SEEK_DATA properly in all fs's that define their own llseek) and changed the behavior of SEEK_CUR and SEEK_SET to always retrieve the file attributes. This is a performance regression. Fix the test so that it makes sense. Signed-off-by: Miklos Szeredi <mszeredi@suse.cz> CC: Josef Bacik <josef@redhat.com> CC: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
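The shape of the logic error (shared with the NFS llseek regression above) is a classic one: for any x, the test x != A || x != B is always true whenever A and B differ. A tiny standalone illustration, not the actual fuse code:

    #include <stdio.h>
    #include <unistd.h>   /* SEEK_SET, SEEK_CUR, SEEK_END */

    int main(void)
    {
        for (int whence = SEEK_SET; whence <= SEEK_END; whence++) {
            /* Buggy test: true for every whence, since no value can
             * equal both SEEK_CUR and SEEK_SET at the same time. */
            int always_true = (whence != SEEK_CUR || whence != SEEK_SET);
            /* Intended test: only the SEEK_END-style cases remain. */
            int intended    = (whence != SEEK_CUR && whence != SEEK_SET);
            printf("whence=%d always_true=%d intended=%d\n",
                   whence, always_true, intended);
        }
        return 0;
    }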
2011-12-21 | fuse: fix fuse_retrieve | Miklos Szeredi
commit 48706d0a91583d08c56e7ef2a7602d99c8d4133f upstream. Fix two bugs in fuse_retrieve():
- retrieving more than one page would yield repeated instances of the first page
- if more than FUSE_MAX_PAGES_PER_REQ pages were requested then the request page array would overflow
fuse_retrieve() was added in 2.6.36 and these bugs had been there since the beginning.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
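Both bugs follow a common pattern when filling a fixed-size page array: forgetting to advance the index, and forgetting to clamp the requested count to the array size. A self-contained sketch of the corrected pattern (the names and the constant are invented stand-ins, not the fuse code):

    #include <stdio.h>

    #define MAX_PAGES_PER_REQ 32   /* stand-in for FUSE_MAX_PAGES_PER_REQ */

    /* Fill 'pages' with consecutive page indexes starting at 'first',
     * never writing past the array; returns how many were stored. */
    static unsigned fill_pages(unsigned long pages[], unsigned wanted,
                               unsigned long first)
    {
        unsigned n = wanted;

        if (n > MAX_PAGES_PER_REQ)      /* clamp: avoids the overflow bug */
            n = MAX_PAGES_PER_REQ;

        for (unsigned i = 0; i < n; i++)
            pages[i] = first + i;       /* advance: avoids repeating page 0 */

        return n;
    }

    int main(void)
    {
        unsigned long pages[MAX_PAGES_PER_REQ];
        unsigned n = fill_pages(pages, 100, 7);

        printf("stored %u pages, first=%lu last=%lu\n",
               n, pages[0], pages[n - 1]);
        return 0;
    }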
2011-12-21 | ext4: handle EOF correctly in ext4_bio_write_page() | Yongqiang Yang
commit 5a0dc7365c240795bf190766eba7a27600be3b3e upstream. We need to zero out the part of a page which is beyond EOF before setting it uptodate; otherwise an mmapped read or a write will see non-zero data beyond EOF.
Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
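The arithmetic involved is simply "zero everything in the page from the EOF offset onward before the page is exposed". A minimal userspace model of that calculation, with PAGE_SIZE and the buffer as stand-ins for the real page:

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* Zero the tail of the page that lies beyond i_size. */
    static void zero_beyond_eof(unsigned char page[PAGE_SIZE],
                                unsigned long long page_start,
                                unsigned long long i_size)
    {
        if (i_size >= page_start + PAGE_SIZE)
            return;                              /* page entirely below EOF */

        size_t valid = (i_size > page_start) ? (size_t)(i_size - page_start) : 0;
        memset(page + valid, 0, PAGE_SIZE - valid);
    }

    int main(void)
    {
        unsigned char page[PAGE_SIZE];
        memset(page, 0xaa, sizeof(page));        /* pretend stale data */

        zero_beyond_eof(page, 8192, 8192 + 100); /* EOF 100 bytes into page */
        printf("byte 99=%#x byte 100=%#x\n", page[99], page[100]);
        return 0;
    }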
2011-12-21 | ext4: avoid potential hang in mpage_submit_io() when blocksize < pagesize | Yongqiang Yang
commit 13a79a4741d37fda2fbafb953f0f301dc007928f upstream. If there is an unwritten but clean buffer in a page and there is a dirty buffer after the buffer, then mpage_submit_io does not write the dirty buffer out. As a result, da_writepages loops forever. This patch fixes the problem by checking dirty flag. Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-21 | ext4: avoid hangs in ext4_da_should_update_i_disksize() | Andrea Arcangeli
commit ea51d132dbf9b00063169c1159bee253d9649224 upstream. If the pte mapping in generic_perform_write() is unmapped between iov_iter_fault_in_readable() and iov_iter_copy_from_user_atomic(), the "copied" parameter to ->write_end can be zero. ext4 couldn't cope with it with delayed allocations enabled. This skips the i_disksize enlargement logic if copied is zero and no new data was appended to the inode.

    gdb> bt
    #0  0xffffffff811afe80 in ext4_da_should_update_i_disksize (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2467
    #1  ext4_da_write_end (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2512
    #2  0xffffffff810d97f1 in generic_perform_write (iocb=<value optimized out>, iov=<value optimized out>, nr_segs=<value optimized out>, pos=0x108000, ppos=0xffff88001e26be40, count=<value optimized out>, written=0x0) at mm/filemap.c:2440
    #3  generic_file_buffered_write (iocb=<value optimized out>, iov=<value optimized out>, nr_segs=<value optimized out>, pos=0x108000, ppos=0xffff88001e26be40, count=<value optimized out>, written=0x0) at mm/filemap.c:2482
    #4  0xffffffff810db5d1 in __generic_file_aio_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=0x1, ppos=0xffff88001e26be40) at mm/filemap.c:2600
    #5  0xffffffff810db853 in generic_file_aio_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=<value optimized out>, pos=<value optimized out>) at mm/filemap.c:2632
    #6  0xffffffff811a71aa in ext4_file_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=0x1, pos=0x108000) at fs/ext4/file.c:136
    #7  0xffffffff811375aa in do_sync_write (filp=0xffff88003f606a80, buf=<value optimized out>, len=<value optimized out>, ppos=0xffff88001e26bf48) at fs/read_write.c:406
    #8  0xffffffff81137e56 in vfs_write (file=0xffff88003f606a80, buf=0x1ec2960 <Address 0x1ec2960 out of bounds>, count=0x4000, pos=0xffff88001e26bf48) at fs/read_write.c:435
    #9  0xffffffff8113816c in sys_write (fd=<value optimized out>, buf=0x1ec2960 <Address 0x1ec2960 out of bounds>, count=0x4000) at fs/read_write.c:487
    #10 <signal handler called>
    #11 0x00007f120077a390 in __brk_reservation_fn_dmi_alloc__ ()
    #12 0x0000000000000000 in ?? ()
    gdb> print offset
    $22 = 0xffffffffffffffff
    gdb> print idx
    $23 = 0xffffffff
    gdb> print inode->i_blkbits
    $24 = 0xc
    gdb> up
    #1  ext4_da_write_end (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2512
    2512            if (ext4_da_should_update_i_disksize(page, end)) {
    gdb> print start
    $25 = 0x0
    gdb> print end
    $26 = 0xffffffffffffffff
    gdb> print pos
    $27 = 0x108000
    gdb> print new_i_size
    $28 = 0x108000
    gdb> print ((struct ext4_inode_info *)((char *)inode-((int)(&((struct ext4_inode_info *)0)->vfs_inode))))->i_disksize
    $29 = 0xd9000
    gdb> down
    2467            for (i = 0; i < idx; i++)
    gdb> print i
    $30 = 0xd44acbee

This is 100% reproducible with some autonuma development code tuned in a very aggressive manner (not normal way even for knumad) which does "exotic" changes to the ptes. It wouldn't normally trigger but I don't see why it can't happen normally if the page is added to swap cache in between the two faults leading to "copied" being zero (which then hangs in ext4). So it should be fixed. Especially possible with lumpy reclaim (albeit disabled if compaction is enabled) as that would ignore the young bits in the ptes.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
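The essence of the fix is a guard in the write_end path: when the copy faulted and copied is zero, no data was appended, so the on-disk size must not be pushed forward. A simplified, hypothetical model of that guard (not the ext4 function itself), reusing the values from the gdb session above:

    #include <stdio.h>

    /* Return the new on-disk size after a write_end-style completion. */
    static unsigned long long update_disksize(unsigned long long disksize,
                                              unsigned long long pos,
                                              unsigned long copied)
    {
        if (copied == 0)                 /* faulted copy: nothing appended */
            return disksize;
        if (pos + copied > disksize)     /* only ever grow the size */
            disksize = pos + copied;
        return disksize;
    }

    int main(void)
    {
        printf("%#llx\n", update_disksize(0xd9000, 0x108000, 0));      /* unchanged */
        printf("%#llx\n", update_disksize(0xd9000, 0x108000, 0x1000)); /* grows */
        return 0;
    }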
2011-12-21 | ext4: display the correct mount option in /proc/mounts for [no]init_itable | Theodore Ts'o
commit fc6cb1cda5db7b2d24bf32890826214b857c728e upstream. /proc/mounts was showing the mount option [no]init_inode_table when the correct mount option that will be accepted by parse_options() is [no]init_itable. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-21 | ext4: fix ext4_end_io_dio() racing against fsync() | Theodore Ts'o
commit b5a7e97039a80fae673ccc115ce595d5b88fb4ee upstream. We need to make sure iocb->private is cleared *before* we put the io_end structure on i_completed_io_list. Otherwise fsync() could potentially run on another CPU and free the iocb structure out from under us. Reported-by: Kent Overstreet <koverstreet@google.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
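The general rule being applied is: finish every access to a shared object before the step that allows another CPU to free it. A small pthread sketch of that ordering, as a generic illustration rather than ext4's actual code:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct request { char payload[32]; };

    static void *completer(void *arg)
    {
        struct request *req = arg;
        /* The completion side may free the object as soon as it owns it. */
        free(req);
        return NULL;
    }

    int main(void)
    {
        struct request *req = malloc(sizeof(*req));
        pthread_t t;
        char saved[32];

        strcpy(req->payload, "io_end state");

        /* Correct order: take everything we still need from 'req'
         * BEFORE publishing it to the completion thread... */
        strcpy(saved, req->payload);

        pthread_create(&t, NULL, completer, req);

        /* ...because touching 'req' here would be a use-after-free,
         * which is exactly the window the ext4 patch closes. */
        printf("saved: %s\n", saved);

        pthread_join(t, NULL);
        return 0;
    }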
2011-12-21 | hfs: fix hfs_find_init() sb->ext_tree NULL ptr oops | Phillip Lougher
commit 434a964daa14b9db083ce20404a4a2add54d037a upstream. Clement Lecigne reports a filesystem which causes a kernel oops in hfs_find_init() trying to dereference sb->ext_tree which is NULL. This proves to be because the filesystem has a corrupted MDB extent record, where the extents file does not fit into the first three extents in the file record (the first blocks).

In hfs_get_block() when looking up the blocks for the extent file (HFS_EXT_CNID), it fails the first blocks special case, and falls through to the extent code (which ultimately calls hfs_find_init()) which is in the process of being initialised. HFS avoids this scenario by always having the extents B-tree fitting into the first blocks (the extents B-tree can't have overflow extents).

The fix is to check at mount time that the B-tree fits into first blocks, i.e. fail if HFS_I(inode)->alloc_blocks >= HFS_I(inode)->first_blocks

Note, the existing commit 47f365eb57573 ("hfs: fix oops on mount with corrupted btree extent records") becomes subsumed into this as a special case, but only for the extents B-tree (HFS_EXT_CNID); it is perfectly acceptable for the catalog B-tree file to grow beyond three extents, with the remaining extent descriptors in the extents overflow.

This fixes CVE-2011-2203

Reported-by: Clement LECIGNE <clement.lecigne@netasq.com>
Signed-off-by: Phillip Lougher <plougher@redhat.com>
Cc: Jeff Mahoney <jeffm@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Moritz Mühlenhoff <jmm@inutil.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
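A sketch of the kind of mount-time sanity check described above; the function and parameter names are invented for illustration and are not the hfs source:

    #include <stdio.h>
    #include <errno.h>

    /* Reject an extents B-tree that does not fit in its first blocks,
     * mirroring the check described above:
     *   fail if alloc_blocks >= first_blocks */
    static int check_ext_tree(unsigned alloc_blocks, unsigned first_blocks)
    {
        if (alloc_blocks >= first_blocks)
            return -EIO;            /* corrupted MDB extent record */
        return 0;
    }

    int main(void)
    {
        printf("sane:      %d\n", check_ext_tree(3, 4));
        printf("corrupted: %d\n", check_ext_tree(4, 3));
        return 0;
    }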
2011-12-21 | jbd/jbd2: validate sb->s_first in journal_get_superblock() | Eryu Guan
commit 8762202dd0d6e46854f786bdb6fb3780a1625efe upstream. I hit a J_ASSERT(blocknr != 0) failure in cleanup_journal_tail() when mounting a fsfuzzed ext3 image. It turns out that the corrupted ext3 image has s_first = 0 in journal superblock, and the 0 is passed to journal->j_head in journal_reset(), then to blocknr in cleanup_journal_tail(), in the end the J_ASSERT failed. So validate s_first after reading journal superblock from disk in journal_get_superblock() to ensure s_first is valid.

The following script could reproduce it:

    fstype=ext3
    blocksize=1024
    img=$fstype.img
    offset=0
    found=0
    magic="c0 3b 39 98"

    dd if=/dev/zero of=$img bs=1M count=8
    mkfs -t $fstype -b $blocksize -F $img
    filesize=`stat -c %s $img`
    while [ $offset -lt $filesize ]
    do
        if od -j $offset -N 4 -t x1 $img | grep -i "$magic";then
            echo "Found journal: $offset"
            found=1
            break
        fi
        offset=`echo "$offset+$blocksize" | bc`
    done
    if [ $found -ne 1 ];then
        echo "Magic \"$magic\" not found"
        exit 1
    fi
    dd if=/dev/zero of=$img seek=$(($offset+23)) conv=notrunc bs=1 count=1
    mkdir -p ./mnt
    mount -o loop $img ./mnt

Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: Moritz Mühlenhoff <jmm@inutil.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-21 | cifs: check for NULL last_entry before calling cifs_save_resume_key | Jeff Layton
commit 7023676f9ee851d94f0942e879243fc1f9081c47 upstream. Prior to commit eaf35b1, cifs_save_resume_key had some NULL pointer checks at the top. It turns out that at least one of those NULL pointer checks is needed after all. When the LastNameOffset in a FIND reply appears to be beyond the end of the buffer, CIFSFindFirst and CIFSFindNext will set srch_inf.last_entry to NULL. Since eaf35b1, the code will now oops in this situation. Fix this by having the callers check for a NULL last entry pointer before calling cifs_save_resume_key. No change is needed for the call site in cifs_readdir as it's not reachable with a NULL current_entry pointer. This should fix: https://bugzilla.redhat.com/show_bug.cgi?id=750247 Cc: Christoph Hellwig <hch@infradead.org> Reported-by: Adam G. Metzler <adamgmetzler@gmail.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-21 | fix apparmor dereferencing potentially freed dentry, sanitize __d_path() API | Al Viro
commit 02125a826459a6ad142f8d91c5b6357562f96615 upstream. The __d_path() API is asking for trouble and in the case of apparmor's d_namespace_path() it is getting just that. The root cause is that when __d_path() misses the root it had been told to look for, it stores the location of the most remote ancestor in *root. Without grabbing references. Sure, at the moment of call it had been pinned down by what we have in *path. And if we raced with umount -l, we could have very well stopped at vfsmount/dentry that got freed as soon as prepend_path() dropped vfsmount_lock.

It is safe to compare these pointers with pre-existing (and known to be still alive) vfsmount and dentry, as long as all we are asking is "is it the same address?". Dereferencing is not safe and apparmor ended up stepping into that. d_namespace_path() really wants to examine the place where we stopped, even if it's not connected to our namespace. As the result, it looked at ->d_sb->s_magic of a dentry that might've been already freed by that point. All other callers had been careful enough to avoid that, but it's really a bad interface - it invites that kind of trouble.

The fix is fairly straightforward, even though it's bigger than I'd like:

* prepend_path() root argument becomes const.
* __d_path() is never called with NULL/NULL root. It was a kludge to start with. Instead, we have an explicit function - d_absolute_path(). Same as __d_path(), except that it doesn't get root passed and stops where it stops. apparmor and tomoyo are using it.
* __d_path() returns NULL on path outside of root. The main caller is show_mountinfo() and that's precisely what we pass root for - to skip those outside chroot jail. Those who don't want that can (and do) use d_path().
* __d_path() root argument becomes const. Everyone agrees, I hope.
* apparmor does *NOT* try to use __d_path() or any of its variants when it sees that path->mnt is an internal vfsmount. In that case it's definitely not mounted anywhere and dentry_path() is exactly what we want there. Handling of sysctl()-triggered weirdness is moved to that place.
* if apparmor is asked to do pathname relative to chroot jail and __d_path() tells it it's not in that jail, the sucker just calls d_absolute_path() instead. That's the other remaining caller of __d_path(), BTW.
* seq_path_root() does _NOT_ return -ENAMETOOLONG (it's stupid anyway - the normal seq_file logics will take care of growing the buffer and redoing the call of ->show() just fine). However, if it gets path not reachable from root, it returns SEQ_SKIP. The only caller adjusted (i.e. stopped ignoring the return value as it used to do).

Reviewed-by: John Johansen <john.johansen@canonical.com>
ACKed-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-21 | fs/proc/meminfo.c: fix compilation error | Claudio Scordino
commit b53fc7c2974a50913f49e1d800fe904a28c338e3 upstream. Fix the error message "directives may not be used inside a macro argument" which appears when the kernel is compiled for the cris architecture. Signed-off-by: Claudio Scordino <claudio@evidence.eu.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
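The underlying portability rule is worth spelling out: behaviour is undefined when a preprocessing directive appears inside the argument list of a macro invocation (C99 6.10.3), and some compilers reject it outright. A standalone example of the broken shape and the usual fix of hoisting the conditional out of the argument list:

    #include <stdio.h>

    #define SHOW(fmt, ...) printf(fmt, __VA_ARGS__)

    int main(void)
    {
        /*
         * Broken shape (do not do this) -- a directive inside the
         * macro's argument list:
         *
         *   SHOW("value: %d\n",
         *   #ifdef USE_BIG
         *          1024
         *   #else
         *          64
         *   #endif
         *   );
         */

        /* Fixed: resolve the conditional first, then call the macro. */
    #ifdef USE_BIG
        int value = 1024;
    #else
        int value = 64;
    #endif
        SHOW("value: %d\n", value);
        return 0;
    }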
2011-12-14 | Merge branch 'linux-3.1.5' into android-tegra-nv-3.1 | Varun Wadekar
Conflicts:
    arch/arm/Kconfig

Change-Id: If8aaaf3efcbbf6c9017b38efb6d76ef933f147fa
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
2011-12-09 | xfs: use doalloc flag in xfs_qm_dqattach_one() | Mitsuo Hayasaka
commit db3e74b582915d66e10b0c73a62763418f54c340 upstream. The doalloc arg in xfs_qm_dqattach_one() is a flag that indicates whether a new area to handle quota information will be allocated if needed. Originally, it was passed to xfs_qm_dqget(), but has been removed by the following commit (probably by mistake):

    commit 8e9b6e7fa4544ea8a0e030c8987b918509c8ff47
    Author: Christoph Hellwig <hch@lst.de>
    Date:   Sun Feb 8 21:51:42 2009 +0100

        xfs: remove the unused XFS_QMOPT_DQLOCK flag

As a result, xfs_qm_dqget() called from xfs_qm_dqattach_one() never allocates the new area even if it is needed. This patch gives the doalloc arg to xfs_qm_dqget() in xfs_qm_dqattach_one() to fix this problem.

Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
Cc: Alex Elder <aelder@sgi.com>
Cc: Christoph Hellwig <hch@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-09 | xfs: Fix possible memory corruption in xfs_readlink | Carlos Maiolino
commit b52a360b2aa1c59ba9970fb0f52bbb093fcc7a24 upstream. Fixes a possible memory corruption when the link is larger than MAXPATHLEN and XFS_DEBUG is not enabled. This also removes the S_ISLNK assert, since the inode mode is checked previously in xfs_readlink_by_handle() and via VFS.

Updated to address concerns raised by Ben Hutchings about the loose attention paid to 32- vs 64-bit values, and the lack of handling a potentially negative pathlen value:

- Changed type of "pathlen" to be xfs_fsize_t, to match that of ip->i_d.di_size
- Added checking for a negative pathlen to the too-long pathlen test, and generalized the message that gets reported in that case to reflect the change

As a result, if a negative pathlen were encountered, this function would return EFSCORRUPTED (and would fail an assertion for a debug build) -- just as would a too-long pathlen.

Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
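The hardening amounts to range-checking an on-disk length before it is used as a copy size. A hedged sketch of such a check, with illustrative constants rather than the xfs definitions:

    #include <stdio.h>

    #define MAXPATHLEN      1024
    #define ERR_CORRUPTED   (-117)   /* stand-in for EFSCORRUPTED */

    /* Validate a symlink length read from disk before it is used as a
     * buffer size: reject negative and too-long values alike. */
    static int check_pathlen(long long pathlen)
    {
        if (pathlen < 0 || pathlen >= MAXPATHLEN)
            return ERR_CORRUPTED;
        return 0;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               check_pathlen(42),      /* ok */
               check_pathlen(-1),      /* corrupted: negative */
               check_pathlen(4096));   /* corrupted: too long */
        return 0;
    }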
2011-12-09 | xfs: fix buffer flushing during unmount | Christoph Hellwig
commit 87c7bec7fc3377b3873eb3a0f4b603981ea16ebb upstream. The code to flush buffers in the umount code is a bit iffy: we first flush all delwri buffers out, but then might be able to queue up a new one when logging the sb counts. On a normal shutdown that one would get flushed out when doing the synchronous superblock write in xfs_unmountfs_writesb, but we skip that one if the filesystem has been shut down.

Fix this by moving the delwri list flushing until just before unmounting the log, and while we're at it also remove the superfluous delwri list and buffer lru flushing for the rt and log device that can never have cached or delwri buffers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Amit Sahrawat <amit.sahrawat83@gmail.com>
Tested-by: Amit Sahrawat <amit.sahrawat83@gmail.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Cc: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-09 | xfs: Return -EIO when xfs_vn_getattr() failed | Mitsuo Hayasaka
commit ed32201e65e15f3e6955cb84cbb544b08f81e5a5 upstream. An attribute of an inode can be fetched via xfs_vn_getattr() in XFS. Currently it returns EIO, not a negative value, when it fails. As a result, the system call does not return a negative value even though an error occurred. The stat(2), ls and mv commands cannot handle this error and do not work correctly. This patch fixes this bug, and returns -EIO, not EIO, when an error is detected in xfs_vn_getattr().

Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Cc: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
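For context, the convention the fix restores: kernel functions report failure as a negative errno (for example -EIO) and success as zero or a non-negative value, and callers test for ret < 0, so returning a bare positive EIO is indistinguishable from success. A tiny illustration of why the sign matters:

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>

    /* Kernel-style convention: 0 on success, negative errno on failure. */
    static int fetch_attr(int simulate_failure)
    {
        if (simulate_failure)
            return -EIO;     /* returning plain EIO here would be missed below */
        return 0;
    }

    int main(void)
    {
        int ret = fetch_attr(1);

        if (ret < 0)
            printf("failed: %s\n", strerror(-ret));
        else
            printf("ok\n");
        return 0;
    }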
2011-12-09 | xfs: avoid direct I/O write vs buffered I/O race | Christoph Hellwig
commit c58cb165bd44de8aaee9755a144136ae743be116 upstream. Currently a buffered reader or writer can add pages to the pagecache while we are waiting for the iolock in xfs_file_dio_aio_write. Prevent this by re-checking mapping->nrpages after we got the iolock, and if necessary upgrade the lock to exclusive mode. To simplify this a bit only take the ilock inside of xfs_file_aio_write_checks.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Cc: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-09 | xfs: don't serialise direct IO reads on page cache checks | Dave Chinner
commit 0c38a2512df272b14ef4238b476a2e4f70da1479 upstream. There is no need to grab the i_mutex or the IO lock in exclusive mode if we don't need to invalidate the page cache. Taking these locks on every direct IO effectively serialises them, as taking the IO lock in exclusive mode has to wait for all shared holders to drop the lock. That only happens when IO is complete, so effectively it prevents dispatch of concurrent direct IO reads to the same inode.

Fix this by taking the IO lock shared to check the page cache state, and only then drop it and take the IO lock exclusively if there is work to be done. Hence for the normal direct IO case, no exclusive locking will occur.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Tested-by: Joern Engel <joern@logfs.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Cc: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
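The locking idea generalises: take the lock shared for the cheap check, and only when work is actually needed drop it and retake it exclusively, re-checking after the upgrade because the state may have changed in the gap. A compact pthread rwlock sketch of that pattern, as a generic illustration rather than the xfs iolock code:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t iolock = PTHREAD_RWLOCK_INITIALIZER;
    static int cached_pages = 3;            /* stand-in for mapping->nrpages */

    static void direct_io_read(void)
    {
        pthread_rwlock_rdlock(&iolock);          /* shared: cheap check */
        if (cached_pages != 0) {
            pthread_rwlock_unlock(&iolock);      /* "upgrade" by drop + retake */
            pthread_rwlock_wrlock(&iolock);
            if (cached_pages != 0) {             /* re-check after the gap */
                printf("invalidating %d cached pages\n", cached_pages);
                cached_pages = 0;
            }
        }
        /* ...issue the direct I/O under whichever lock we ended up holding... */
        printf("doing direct I/O\n");
        pthread_rwlock_unlock(&iolock);
    }

    int main(void)
    {
        direct_io_read();   /* first call takes the exclusive path once */
        direct_io_read();   /* second call stays on the shared fast path */
        return 0;
    }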
2011-12-09 | ext4: fix racy use-after-free in ext4_end_io_dio() | Tejun Heo
commit 4c81f045c0bd2cbb78cc6446a4cd98038fe11a2e upstream. ext4_end_io_dio() queues io_end->work and then clears iocb->private; however, io_end->work calls aio_complete() which frees the iocb object. If that slab object gets reallocated, then ext4_end_io_dio() can end up clearing someone else's iocb->private, and this use-after-free can cause a leak of a struct ext4_io_end_t structure.

Detected and tested with slab poisoning.

[ Note: Can also reproduce using 12 fio's against 12 file systems with the following configuration file:

    [global]
    direct=1
    ioengine=libaio
    iodepth=1
    bs=4k
    ba=4k
    size=128m

    [create]
    filename=${TESTDIR}
    rw=write

  -- tytso ]

Google-Bug-Id: 5354697
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Kent Overstreet <koverstreet@google.com>
Tested-by: Kent Overstreet <koverstreet@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-09 | eCryptfs: Extend array bounds for all filename chars | Tyler Hicks
commit 0f751e641a71157aa584c2a2e22fda52b52b8a56 upstream. From mhalcrow's original commit message: Characters with ASCII values greater than the size of filename_rev_map[] are valid filename characters. ecryptfs_decode_from_filename() will access kernel memory beyond that array, and ecryptfs_parse_tag_70_packet() will then decrypt those characters. The attacker, using the FNEK of the crafted file, can then re-encrypt the characters to reveal the kernel memory past the end of the filename_rev_map[] array. I expect low security impact since this array is statically allocated in the text area, and the amount of memory past the array that is accessible is limited by the largest possible ASCII filename character. This patch solves the issue reported by mhalcrow but with an implementation suggested by Linus to simply extend the length of filename_rev_map[] to 256. Characters greater than 0x7A are mapped to 0x00, which is how invalid characters less than 0x7A were previously being handled. Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Reported-by: Michael Halcrow <mhalcrow@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
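The bug class is a lookup table indexed directly by a character value: bytes above the table size read past the end of the array. The described fix, sizing the table to 256 entries and mapping unknown bytes to 0, can be modelled in a few lines (the table contents below are fabricated, not the eCryptfs map):

    #include <stdio.h>

    /* 256-entry reverse map: every possible byte has a slot, and bytes
     * without a defined mapping stay 0 (treated as invalid). */
    static unsigned char rev_map[256];

    static void init_map(void)
    {
        const char *alphabet = "ABCDEFGH";   /* fabricated mapping */
        for (unsigned char i = 0; alphabet[i]; i++)
            rev_map[(unsigned char)alphabet[i]] = i + 1;
    }

    static int decode_byte(char c)
    {
        /* The unsigned char cast matters: a plain (possibly signed) char
         * could index negatively; the 256 entries cover everything else. */
        return rev_map[(unsigned char)c];
    }

    int main(void)
    {
        init_map();
        printf("%d %d %d\n", decode_byte('A'), decode_byte('H'),
               decode_byte('\xfe'));   /* out-of-range byte maps to 0 */
        return 0;
    }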
2011-12-09 | eCryptfs: Flush file in vma close | Tyler Hicks
commit 32001d6fe9ac6b0423e674a3093aa56740849f3b upstream. Dirty pages weren't being written back when an mmap'ed eCryptfs file was closed before the mapping was unmapped. Since f_ops->flush() is not called by the munmap() path, the lower file was simply being released. This patch flushes the eCryptfs file in the vm_ops->close() path. https://launchpad.net/bugs/870326 Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-12-09 | eCryptfs: Prevent file create race condition | Tyler Hicks
commit b59db43ad4434519feb338eacb01d77eb50825c5 upstream. The file creation path prematurely called d_instantiate() and unlock_new_inode() before the eCryptfs inode info was fully allocated and initialized and before the eCryptfs metadata was written to the lower file. This could result in race conditions in subsequent file and inode operations leading to unexpected error conditions or a null pointer dereference while attempting to use the unallocated memory. https://launchpad.net/bugs/813146 Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-11-30 | HACK merge fixups for compile | Dan Willemsen
Rebase-Id: Rbc628711479b187a90437bea94776066c7a58b54
2011-11-30 | fs: fix compilation error for yaffs2 | Mursalin Akon
The update is based on 2.6.36 code line. It fixes compilation error. Bug 877219 Signed-off-by: Mursalin Akon <makon@nvidia.com> Change-Id: Ic889bd9a62135dc19531c0b33f13cb4eafaa9104 Reviewed-on: http://git-master/r/52464 Reviewed-by: Allen Martin <amartin@nvidia.com> Rebase-Id: Rdf6f6d37c35a43a2834b62e12b7daa7e166fbe78
2011-11-30 | Merge branch 'korg-android-tegra-3.1' into after-upstream-android | Dan Willemsen
Conflicts:
    arch/arm/mach-tegra/Kconfig
    arch/arm/mach-tegra/board-ventana.c
    drivers/misc/Kconfig
    drivers/video/tegra/dc/hdmi.c

Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
2011-11-30 | ARM: report present cpus instead of online CPUs | Jon Mayo
For machines that use hotplug to control special power domains, report present CPUs in /proc/cpuinfo and /proc/stat instead of the standard behaviour of reporting online CPUs. This breaks hotplug behavior on all other platforms, so the option is only enabled with CONFIG_REPORT_PRESENT_CPUS.

Bug 849167

Original-Change-Id: Ib04bb73634b3a8c99592b1ac13eb471a8ecfd0c5
Reviewed-on: http://git-master/r/37732
Reviewed-by: Jonathan Mayo <jmayo@nvidia.com>
Tested-by: Jonathan Mayo <jmayo@nvidia.com>
Reviewed-by: Aleksandr Frid <afrid@nvidia.com>
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Rebase-Id: R746f854d51a4b8f15774a547a3219acf7576bf6e
2011-11-30 | fs: ext4: Fix computation of inodes per block group | Colin Cross
857ac889cce8a486d47874db4d2f9620e7e9e5de (ext4: add interface to advertise ext4 features in sysfs) added an error check that exposes a bug in the computation of sbi->s_itb_per_group. If the number of inodes per group is not a multiple of the number of inodes per block, Original-Change-Id: I8c60817dbb6feb43535b567ec7ea5ee0af709c37 Signed-off-by: Colin Cross <ccross@android.com> (cherry picked from commit 8703a0ccb0135ae0de0d7011f29eeb6dc1caa486) Rebase-Id: R7fc03850010d565447bb8702710040f112705738
2011-11-30 | fs: partitions: efi: Add force_gpt_sector parameter | Colin Cross
force_gpt_sector=<sector> causes the GPT partition search to look at the specified sector for a valid GPT header if the GPT is not found at the beginning or the end of the block device. Change-Id: I9b5f85ce24719c0538d42ec5a94344c7f6556b2b Signed-off-by: Colin Cross <ccross@android.com>
2011-11-30 | fuse: Freeze client on suspend when request sent to userspace | Todd Poynor
Suspend attempts can abort when the FUSE daemon is already frozen and a client is waiting uninterruptibly for a response, causing freezing of tasks to fail. Use the freeze-friendly wait API, but disregard other signals. Change-Id: Icefb7e4bbc718ccb76bf3c04daaa5eeea7e0e63c Signed-off-by: Todd Poynor <toddpoynor@google.com>
2011-11-30 | block: genhd: Add partition name to the partition info uevent callback | Brian Swetland
For disk and partition devices, add a new uevent parameter: PARTNAME, which specifies the partition name of a partition device.

Signed-off-by: Dima Zavin <dima@android.com>
2011-11-30 | fs: partitions: Fix warnings in fs/partitions/check.c | Colin Cross
Change-Id: I4398ace0c55d4833b1fcbb7a4e71ab8f0b1b044a Signed-off-by: Colin Cross <ccross@android.com>
2011-11-30 | block: genhd: Add disk/partition specific uevent callbacks for partition info | San Mehat
For disk devices, a new uevent parameter 'NPARTS' specifies the number of partitions detected by the kernel. Partition devices get 'PARTN', which specifies the partition's index in the table.

Signed-off-by: San Mehat <san@google.com>
2011-11-30 | proc: smaps: Allow smaps access for CAP_SYS_RESOURCE | San Mehat
Signed-off-by: San Mehat <san@google.com>
2011-11-30 | fs: yaffs: don't force YAFFS_TRACE_ALWAYS for all trace levels | Dima Zavin
Change-Id: I9ddc676382d26aef7f12145d412fe670cb486317 Signed-off-by: Dima Zavin <dima@android.com>
2011-11-30 | fs: yaffs: Import yaffs from Thu Dec 23 13:31:37 2010 +1300 | Arve Hjønnevåg
commit ddf33fed15c2376bfb602d62dd018c63fce60df8 Author: Timothy Manning <tfhmanning@gmail.com> Date: Thu Dec 23 13:31:37 2010 +1300 yaffs updated direct/timothy_tests/quick_tests Signed-off-by: Timothy Manning <tfhmanning@gmail.com> Change-Id: I5bbe5a05277bdf8a6fe188bbe4c77725b3fa2aae Signed-off-by: Dima Zavin <dima@android.com>
2011-11-30 | fs: block_dump: Don't display inode changes if block_dump < 2 | San Mehat
Signed-off-by: San Mehat <san@android.com>
2011-11-30 | Grants system server access to /proc/<pid>/oom_adj for Android applications. | Mike Chan
Signed-off-by: Brian Swetland <swetland@google.com>
2011-11-30 | FAT: Add new ioctl VFAT_IOCTL_GET_VOLUME_ID for reading the volume ID. | Mike Lockwood
Signed-off-by: Brian Swetland <swetland@google.com>
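Assuming the new ioctl is exported to userspace as VFAT_IOCTL_GET_VOLUME_ID via <linux/msdos_fs.h> and that it returns the ID through its third argument (both are assumptions about this patch, not verified here), usage would look roughly like this:

    /* Hypothetical usage sketch; builds only against kernel headers that
     * actually define VFAT_IOCTL_GET_VOLUME_ID. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/msdos_fs.h>

    int main(int argc, char **argv)
    {
        const char *mnt = argc > 1 ? argv[1] : "/mnt/sdcard";  /* hypothetical mount point */
        unsigned int id = 0;
        int fd = open(mnt, O_RDONLY);

        if (fd < 0 || ioctl(fd, VFAT_IOCTL_GET_VOLUME_ID, &id) < 0) {
            perror("VFAT_IOCTL_GET_VOLUME_ID");
            return 1;
        }
        printf("volume id: %08X\n", id);
        close(fd);
        return 0;
    }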
2011-11-26 | vmscan: fix shrinker callback bug in fs/super.c | Mikulas Patocka
commit 09f363c7363eb10cfb4b82094bd7064e5608258b upstream. The callback must not return -1 when nr_to_scan is zero. Fix the bug in fs/super.c and add this requirement to the callback specification. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: Dave Chinner <david@fromorbit.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
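The contract being fixed: a call with nr_to_scan == 0 only asks "how many freeable objects do you have?", so the callback must report a count (possibly zero) and never -1 in that case. A plain-C model of a conforming callback, not the kernel's struct shrinker interface:

    #include <stdio.h>

    static long cached_objects = 128;

    /* Model of a shrinker callback: nr_to_scan == 0 means "just report
     * how many freeable objects exist"; returning -1 is only meaningful
     * for an actual scan request that cannot proceed right now. */
    static long shrink_cache(long nr_to_scan)
    {
        if (nr_to_scan == 0)
            return cached_objects;          /* never -1 here */

        long freed = nr_to_scan < cached_objects ? nr_to_scan : cached_objects;
        cached_objects -= freed;
        return cached_objects;              /* objects remaining */
    }

    int main(void)
    {
        printf("query: %ld\n", shrink_cache(0));
        printf("after scanning 50: %ld\n", shrink_cache(50));
        return 0;
    }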
2011-11-26 | nfs: when attempting to open a directory, fall back on normal lookup (try #5) | Jeff Layton
commit 1788ea6e3b2a58cf4fb00206e362d9caff8d86a7 upstream. commit d953126 changed how nfs_atomic_lookup handles an -EISDIR return from an OPEN call. Prior to that patch, that caused the client to fall back to doing a normal lookup. When that patch went in, the code began returning that error to userspace. The d_revalidate codepath however never had the corresponding change, so it was still possible to end up with a NULL ctx->state pointer after that. That patch caused a regression. When we attempt to open a directory that does not have a cached dentry, that open now errors out with EISDIR. If you attempt the same open with a cached dentry, it will succeed.

Fix this by reverting the change in nfs_atomic_lookup and allowing attempts to open directories to fall back to a normal lookup. Also, add an NFSv4-specific f_ops->open routine that just returns -ENOTDIR. This should never be called if things are working properly, but if it ever is, then the dprintk may help in debugging. To facilitate this, a new file_operations field is also added to the nfs_rpc_ops struct.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-11-21 | hfs: add sanity check for file name length | Dan Carpenter
commit bc5b8a9003132ae44559edd63a1623b7b99dfb68 upstream. On a corrupted file system the ->len field could be wrong leading to a buffer overflow. Reported-and-acked-by: Clement LECIGNE <clement.lecigne@netasq.com> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-11-11 | VFS: we need to set LOOKUP_JUMPED on mountpoint crossing | Al Viro
commit a3fbbde70a0cec017f2431e8f8de208708c76acc upstream. Mountpoint crossing is similar to following procfs symlinks - we do not get ->d_revalidate() called for dentry we have arrived at, with unpleasant consequences for NFS4.

Simple way to reproduce the problem in mainline:

    cat >/tmp/a.c <<'EOF'
    #include <unistd.h>
    #include <fcntl.h>
    #include <stdio.h>
    main()
    {
        struct flock fl = {.l_type = F_RDLCK, .l_whence = SEEK_SET, .l_len = 1};
        if (fcntl(0, F_SETLK, &fl))
            perror("setlk");
    }
    EOF
    cc /tmp/a.c -o /tmp/test

then on nfs4:

    mount --bind file1 file2
    /tmp/test < file1    # ok
    /tmp/test < file2    # spews "setlk: No locks available"...

What happens is the missing call of ->d_revalidate() after mountpoint crossing and that's where NFS4 would issue OPEN request to server. The fix is simple - treat mountpoint crossing the same way we deal with following procfs-style symlinks. I.e. set LOOKUP_JUMPED...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-11-11 | VFS: fix statfs() automounter semantics regression | Dan McGee
commit 5c8a0fbba543d9428a486f0d1282bbcf3cf1d95a upstream. No one in their right mind would expect statfs() to not work on a automounter managed mount point. Fix it. [ I'm not sure about the "no one in their right mind" part. It's not mounted, and you didn't ask for it to be mounted. But nobody will really care, and this probably makes it match previous semantics, so.. - Linus ] This mirrors the fix made to the quota code in 815d405ceff0d69646. Signed-off-by: Dan McGee <dpmcgee@gmail.com> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
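What the fix restores is simply that a plain statfs(2) call succeeds on such a mount point without forcing the automount. For reference, a minimal caller:

    #include <stdio.h>
    #include <sys/vfs.h>    /* statfs(2) on Linux */

    int main(int argc, char **argv)
    {
        struct statfs st;
        const char *path = argc > 1 ? argv[1] : "/";

        if (statfs(path, &st) != 0) {
            perror("statfs");
            return 1;
        }
        printf("%s: type=%#lx blocks=%lu free=%lu\n",
               path, (unsigned long)st.f_type,
               (unsigned long)st.f_blocks, (unsigned long)st.f_bfree);
        return 0;
    }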
2011-11-11 | block: make gendisk hold a reference to its queue | Tejun Heo
commit f992ae801a7dec34a4ed99a6598bbbbfb82af4fb upstream. The following command sequence triggers an oops.

    # mount /dev/sdb1 /mnt
    # echo 1 > /sys/class/scsi_device/0\:0\:1\:0/device/delete
    # umount /mnt

    general protection fault: 0000 [#1] PREEMPT SMP
    CPU 2
    Modules linked in:
    Pid: 791, comm: umount Not tainted 3.1.0-rc3-work+ #8 Bochs Bochs
    RIP: 0010:[<ffffffff810d0879>] [<ffffffff810d0879>] __lock_acquire+0x389/0x1d60
    ...
    Call Trace:
     [<ffffffff810d2845>] lock_acquire+0x95/0x140
     [<ffffffff81aed87b>] _raw_spin_lock+0x3b/0x50
     [<ffffffff811573bc>] bdi_lock_two+0x5c/0x70
     [<ffffffff811c2f6c>] bdev_inode_switch_bdi+0x4c/0xf0
     [<ffffffff811c3fcb>] __blkdev_put+0x11b/0x1d0
     [<ffffffff811c4010>] __blkdev_put+0x160/0x1d0
     [<ffffffff811c40df>] blkdev_put+0x5f/0x190
     [<ffffffff8118f18d>] kill_block_super+0x4d/0x80
     [<ffffffff8118f4a5>] deactivate_locked_super+0x45/0x70
     [<ffffffff8119003a>] deactivate_super+0x4a/0x70
     [<ffffffff811ac4ad>] mntput_no_expire+0xed/0x130
     [<ffffffff811acf2e>] sys_umount+0x7e/0x3a0
     [<ffffffff81aeeeab>] system_call_fastpath+0x16/0x1b

This is because bdev holds on to disk but disk doesn't pin the associated queue. If a SCSI device is removed while the device is still open, the sdev puts the base reference to the queue on release. When the bdev is finally released, the associated queue is already gone along with the bdi and bdev_inode_switch_bdi() ends up dereferencing already freed bdi.

Even if it were not for this bug, disk not holding onto the associated queue is very unusual and error-prone. Fix it by making add_disk() take an extra reference to its queue and put it on disk_release() and ensuring that disk and its fops owner are put in that order after all accesses to the disk and queue are complete.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-11-11 | ext4: fix race in xattr block allocation path | Eric Sandeen
commit 6d6a435190bdf2e04c9465cde5bdc3ac68cf11a4 upstream. Ceph users reported that when using Ceph on ext4, the filesystem would often become corrupted, containing inodes with incorrect i_blocks counters. I managed to reproduce this with a very hacked-up "streamtest" binary from the Ceph tree. Ceph is doing a lot of xattr writes, to out-of-inode blocks. There is also another thread which does sync_file_range and close, of the same files. The problem appears to happen due to this race:

    sync/flush thread                      xattr-set thread
    -----------------                      ----------------
    do_writepages                          ext4_xattr_set
      ext4_da_writepages                     ext4_xattr_set_handle
        mpage_da_map_blocks                    ext4_xattr_block_set
        set DELALLOC_RESERVE                     ext4_new_meta_blocks
                                                   ext4_mb_new_blocks
                                                     if (!i_delalloc_reserved_flag)
                                                       vfs_dq_alloc_block
        ext4_get_blocks
        down_write(i_data_sem)
        set i_delalloc_reserved_flag
        ...
        up_write(i_data_sem)
                                                     if (i_delalloc_reserved_flag)
                                                       vfs_dq_alloc_block_nofail

In other words, the sync/flush thread pops in and sets i_delalloc_reserved_flag on the inode, which makes the xattr thread think that it's in a delalloc path in ext4_new_meta_blocks(), and add the block for a second time, after already having added it once in the !i_delalloc_reserved_flag case in ext4_mb_new_blocks.

The real problem is that we shouldn't be using the DELALLOC_RESERVED state flag, and instead we should be passing EXT4_GET_BLOCKS_DELALLOC_RESERVE down to ext4_map_blocks() instead of using an inode state flag. We'll fix this for now with using i_data_sem to prevent this race, but this is really not the right way to fix things.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>