This is the 4.9.144 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlwLs3sACgkQONu9yGCS
aT69Tw//a/lkIfcFzpWkUtWGCViTtNXJxFv1zrkjgU1ldaZa13gJYHapqT1GAXic
rxI7kvltiacSUmd+hjqlLFV8tXevaQgIZ2X7jkWGvwSX2TO53ANajuWbQ3yjw/uI
6kUymLwhTLbxyW7xEsQAAUXvyLDuZEvYT10yL5oeAonSR7q15H48EULXQbEDvsvs
Pqsbnta8yToTyjcZGETBuIwXlV3yA6TyZKb7GPvMwYvZoSwHEWcVHN6heBMTTogb
Q6TyVKxI9ffhc+Ygodw0Aon/JLpw/gVMuuwKv7MEXR/UdlIu/fhJdgdtYN1HpGMP
BYuCUiHnh8ji4qFylfcvTOBf1/1PUuxPJct4B1EYz86UxA/rFCJg6I/qvhPNSq2z
jaZWVWKAU0OY+kgDkzK33thBca786ZC0SkrynqVKt7D9eDvv5uvxdSLxbxpqqbEf
EOQyJcrxtKyW9HVEpw+lxUSBp+ZCz7L2RJ6L0wknikeOV65N657zZleyXRyUggLC
skTlS4MCXSqvjizEm6yM2+UpFqEa6AG5xg1kfrRS0IN6Q0a2hEMx7zwJmSdN1ABl
w9hHaUM1Bwh9o6Z6SSzZMgkW83EN9khejpJWt+/0sSkhBA8kfgsTZYt5wbeSqBSj
c4v2aRAs4FeuigI1ibGhhzVkrESBE97vcTUnooGF0SNgpgS03OE=
=lEsp
-----END PGP SIGNATURE-----

Merge 4.9.144 into android-4.9

Changes in 4.9.144
	Kbuild: suppress packed-not-aligned warning for default setting only
	disable stringop truncation warnings for now
	test_hexdump: use memcpy instead of strncpy
	kobject: Replace strncpy with memcpy
	unifdef: use memcpy instead of strncpy
	kernfs: Replace strncpy with memcpy
	ip_tunnel: Fix name string concatenate in __ip_tunnel_create()
	drm: gma500: fix logic error
	scsi: bfa: convert to strlcpy/strlcat
	staging: rts5208: fix gcc-8 logic error warning
	kdb: use memmove instead of overlapping memcpy
	x86/power/64: Use char arrays for asm function names
	iser: set sector for ambiguous mr status errors
	uprobes: Fix handle_swbp() vs. unregister() + register() race once more
	MIPS: ralink: Fix mt7620 nd_sd pinmux
	mips: fix mips_get_syscall_arg o32 check
	IB/mlx5: Avoid load failure due to unknown link width
	drm/ast: Fix incorrect free on ioregs
	drm: set is_master to 0 upon drm_new_set_master() failure
	scsi: scsi_devinfo: cleanly zero-pad devinfo strings
	ALSA: trident: Suppress gcc string warning
	scsi: csiostor: Avoid content leaks and casts
	kgdboc: Fix restrict error
	kgdboc: Fix warning with module build
	binder: fix proc->files use-after-free
	svm: Add mutex_lock to protect apic_access_page_done on AMD systems
	drm/mediatek: fix OF sibling-node lookup
	Input: xpad - quirk all PDP Xbox One gamepads
	Input: matrix_keypad - check for errors from of_get_named_gpio()
	Input: elan_i2c - add ELAN0620 to the ACPI table
	Input: elan_i2c - add ACPI ID for Lenovo IdeaPad 330-15ARR
	Input: elan_i2c - add support for ELAN0621 touchpad
	btrfs: Always try all copies when reading extent buffers
	Btrfs: fix use-after-free when dumping free space
	ARC: change defconfig defaults to ARCv2
	arc: [devboards] Add support of NFSv3 ACL
	udf: Allow mounting volumes with incorrect identification strings
	reset: make optional functions really optional
	reset: core: fix reset_control_put
	reset: fix optional reset_control_get stubs to return NULL
	reset: add exported __reset_control_get, return NULL if optional
	reset: make device_reset_optional() really optional
	reset: remove remaining WARN_ON() in <linux/reset.h>
	mm: cleancache: fix corruption on missed inode invalidation
	usb: gadget: dummy: fix nonsensical comparisons
	net: qed: use correct strncpy() size
	tipc: use destination length for copy string
	libceph: drop len argument of *verify_authorizer_reply()
	libceph: no need to drop con->mutex for ->get_authorizer()
	libceph: store ceph_auth_handshake pointer in ceph_connection
	libceph: factor out __prepare_write_connect()
	libceph: factor out __ceph_x_decrypt()
	libceph: factor out encrypt_authorizer()
	libceph: add authorizer challenge
	libceph: implement CEPHX_V2 calculation mode
	libceph: weaken sizeof check in ceph_x_verify_authorizer_reply()
	libceph: check authorizer reply/challenge length before reading
	bpf/verifier: Add spi variable to check_stack_write()
	bpf/verifier: Pass instruction index to check_mem_access() and check_xadd()
	bpf: Prevent memory disambiguation attack
	wil6210: missing length check in wmi_set_ie
	mm/hugetlb.c: don't call region_abort if region_chg fails
	hugetlbfs: fix offset overflow in hugetlbfs mmap
	hugetlbfs: check for pgoff value overflow
	btrfs: validate type when reading a chunk
	btrfs: Verify that every chunk has corresponding block group at mount time
	btrfs: Refactor check_leaf function for later expansion
	btrfs: Check if item pointer overlaps with the item itself
	btrfs: Add sanity check for EXTENT_DATA when reading out leaf
	btrfs: Add checker for EXTENT_CSUM
	btrfs: Move leaf and node validation checker to tree-checker.c
	btrfs: struct-funcs, constify readers
	btrfs: tree-checker: Enhance btrfs_check_node output
	btrfs: tree-checker: Fix false panic for sanity test
	btrfs: tree-checker: Add checker for dir item
	btrfs: tree-checker: use %zu format string for size_t
	btrfs: tree-check: reduce stack consumption in check_dir_item
	btrfs: tree-checker: Verify block_group_item
	btrfs: tree-checker: Detect invalid and empty essential trees
	btrfs: Check that each block group has corresponding chunk at mount time
	btrfs: tree-checker: Check level for leaves and nodes
	btrfs: tree-checker: Fix misleading group system information
	f2fs: fix a panic caused by NULL flush_cmd_control
	f2fs: fix race condition in between free nid allocator/initializer
	f2fs: detect wrong layout
	f2fs: return error during fill_super
	f2fs: check blkaddr more accuratly before issue a bio
	f2fs: sanity check on sit entry
	f2fs: enhance sanity_check_raw_super() to avoid potential overflow
	f2fs: clean up with is_valid_blkaddr()
	f2fs: introduce and spread verify_blkaddr
	f2fs: fix to do sanity check with secs_per_zone
	f2fs: fix to do sanity check with user_block_count
	f2fs: Add sanity_check_inode() function
	f2fs: fix to do sanity check with node footer and iblocks
	f2fs: fix to do sanity check with block address in main area
	f2fs: fix missing up_read
	f2fs: fix to do sanity check with block address in main area v2
	f2fs: free meta pages if sanity check for ckpt is failed
	f2fs: fix to do sanity check with cp_pack_start_sum
	xfs: don't fail when converting shortform attr to long form during ATTR_REPLACE
	hugetlbfs: fix bug in pgoff overflow checking
	Linux 4.9.144

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 860c8b8931
86 changed files with 1728 additions and 690 deletions
diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 143
+SUBLEVEL = 144
 EXTRAVERSION =
 NAME = Roaring Lionus
 
@@ -869,6 +869,9 @@ KBUILD_CFLAGS += $(call cc-option,-Wdeclaration-after-statement,)
 # disable pointer signed / unsigned warnings in gcc 4.0
 KBUILD_CFLAGS += $(call cc-disable-warning, pointer-sign)
 
+# disable stringop warnings in gcc 8+
+KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation)
+
 # disable invalid "can't wrap" optimizations for signed / pointers
 KBUILD_CFLAGS += $(call cc-option,-fno-strict-overflow)
 
@@ -105,7 +105,7 @@ endmenu
 
 choice
 	prompt "ARC Instruction Set"
-	default ISA_ARCOMPACT
+	default ISA_ARCV2
 
 config ISA_ARCOMPACT
 	bool "ARCompact ISA"
@@ -8,7 +8,7 @@
 
 UTS_MACHINE := arc
 
-KBUILD_DEFCONFIG := nsim_700_defconfig
+KBUILD_DEFCONFIG := nsim_hs_defconfig
 
 cflags-y += -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__
 cflags-$(CONFIG_ISA_ARCOMPACT) += -mA7
@@ -15,6 +15,7 @@ CONFIG_PERF_EVENTS=y
 # CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
+CONFIG_ISA_ARCOMPACT=y
 CONFIG_MODULES=y
 CONFIG_MODULE_FORCE_LOAD=y
 CONFIG_MODULE_UNLOAD=y
@@ -96,6 +97,7 @@ CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
 CONFIG_TMPFS=y
 CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
 # CONFIG_ENABLE_WARN_DEPRECATED is not set
@@ -97,6 +97,7 @@ CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
 CONFIG_TMPFS=y
 CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
 # CONFIG_ENABLE_WARN_DEPRECATED is not set
@@ -98,6 +98,7 @@ CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
 CONFIG_TMPFS=y
 CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
 # CONFIG_ENABLE_WARN_DEPRECATED is not set
@@ -15,6 +15,7 @@ CONFIG_SYSCTL_SYSCALL=y
 CONFIG_EMBEDDED=y
 CONFIG_PERF_EVENTS=y
 # CONFIG_COMPAT_BRK is not set
+CONFIG_ISA_ARCOMPACT=y
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
 CONFIG_MODULE_FORCE_LOAD=y
@@ -75,6 +76,7 @@ CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 # CONFIG_MISC_FILESYSTEMS is not set
 CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
 CONFIG_ROOT_NFS=y
 CONFIG_DEBUG_INFO=y
 # CONFIG_ENABLE_WARN_DEPRECATED is not set
@@ -16,6 +16,7 @@ CONFIG_EMBEDDED=y
 CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
+CONFIG_ISA_ARCOMPACT=y
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
 # CONFIG_LBDAF is not set
@@ -16,6 +16,7 @@ CONFIG_EMBEDDED=y
 CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
+CONFIG_ISA_ARCOMPACT=y
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
 # CONFIG_LBDAF is not set
@@ -70,5 +71,6 @@ CONFIG_EXT2_FS_XATTR=y
 CONFIG_TMPFS=y
 # CONFIG_MISC_FILESYSTEMS is not set
 CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
 # CONFIG_ENABLE_WARN_DEPRECATED is not set
 # CONFIG_ENABLE_MUST_CHECK is not set
@@ -69,5 +69,6 @@ CONFIG_EXT2_FS_XATTR=y
 CONFIG_TMPFS=y
 # CONFIG_MISC_FILESYSTEMS is not set
 CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
 # CONFIG_ENABLE_WARN_DEPRECATED is not set
 # CONFIG_ENABLE_MUST_CHECK is not set
@@ -80,6 +80,7 @@ CONFIG_EXT2_FS_XATTR=y
 CONFIG_TMPFS=y
 # CONFIG_MISC_FILESYSTEMS is not set
 CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
 # CONFIG_ENABLE_WARN_DEPRECATED is not set
 # CONFIG_ENABLE_MUST_CHECK is not set
 CONFIG_FTRACE=y
@@ -19,6 +19,7 @@ CONFIG_KALLSYMS_ALL=y
 # CONFIG_AIO is not set
 CONFIG_EMBEDDED=y
 # CONFIG_COMPAT_BRK is not set
+CONFIG_ISA_ARCOMPACT=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
 CONFIG_MODULE_FORCE_LOAD=y
@@ -88,6 +88,7 @@ CONFIG_NTFS_FS=y
 CONFIG_TMPFS=y
 CONFIG_JFFS2_FS=y
 CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
 # CONFIG_ENABLE_WARN_DEPRECATED is not set
@@ -87,6 +87,7 @@ CONFIG_NTFS_FS=y
 CONFIG_TMPFS=y
 CONFIG_JFFS2_FS=y
 CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
 # CONFIG_ENABLE_WARN_DEPRECATED is not set
@@ -51,7 +51,7 @@ static inline unsigned long mips_get_syscall_arg(unsigned long *arg,
 #ifdef CONFIG_64BIT
 	case 4: case 5: case 6: case 7:
 #ifdef CONFIG_MIPS32_O32
-		if (test_thread_flag(TIF_32BIT_REGS))
+		if (test_tsk_thread_flag(task, TIF_32BIT_REGS))
 			return get_user(*arg, (int *)usp + n);
 		else
 #endif
@@ -81,7 +81,7 @@ static struct rt2880_pmx_func pcie_rst_grp[] = {
 };
 static struct rt2880_pmx_func nd_sd_grp[] = {
 	FUNC("nand", MT7620_GPIO_MODE_NAND, 45, 15),
-	FUNC("sd", MT7620_GPIO_MODE_SD, 45, 15)
+	FUNC("sd", MT7620_GPIO_MODE_SD, 47, 13)
 };
 
 static struct rt2880_pmx_group mt7620a_pinmux_data[] = {
@@ -42,8 +42,7 @@ struct saved_context {
 	set_debugreg((thread)->debugreg##register, register)
 
 /* routines for saving/restoring kernel state */
-extern int acpi_save_state_mem(void);
-extern char core_restore_code;
-extern char restore_registers;
+extern char core_restore_code[];
+extern char restore_registers[];
 
 #endif /* _ASM_X86_SUSPEND_64_H */
@@ -1333,20 +1333,23 @@ static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu, int index)
 static int avic_init_access_page(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
-	int ret;
+	int ret = 0;
 
+	mutex_lock(&kvm->slots_lock);
 	if (kvm->arch.apic_access_page_done)
-		return 0;
+		goto out;
 
-	ret = x86_set_memory_region(kvm,
+	ret = __x86_set_memory_region(kvm,
 				    APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
 				    APIC_DEFAULT_PHYS_BASE,
 				    PAGE_SIZE);
 	if (ret)
-		return ret;
+		goto out;
 
 	kvm->arch.apic_access_page_done = true;
-	return 0;
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
 }
 
 static int avic_init_backing_page(struct kvm_vcpu *vcpu)
@@ -126,7 +126,7 @@ static int relocate_restore_code(void)
 	if (!relocated_restore_code)
 		return -ENOMEM;
 
-	memcpy((void *)relocated_restore_code, &core_restore_code, PAGE_SIZE);
+	memcpy((void *)relocated_restore_code, core_restore_code, PAGE_SIZE);
 
 	/* Make the page containing the relocated code executable */
 	pgd = (pgd_t *)__va(read_cr3()) + pgd_index(relocated_restore_code);
@@ -197,8 +197,8 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
 
 	if (max_size < sizeof(struct restore_data_record))
 		return -EOVERFLOW;
-	rdr->jump_address = (unsigned long)&restore_registers;
-	rdr->jump_address_phys = __pa_symbol(&restore_registers);
+	rdr->jump_address = (unsigned long)restore_registers;
+	rdr->jump_address_phys = __pa_symbol(restore_registers);
 	rdr->cr3 = restore_cr3;
 	rdr->magic = RESTORE_MAGIC;
 	return 0;
@@ -556,7 +556,8 @@ int ast_driver_unload(struct drm_device *dev)
 	drm_mode_config_cleanup(dev);
 
 	ast_mm_fini(ast);
-	pci_iounmap(dev->pdev, ast->ioregs);
+	if (ast->ioregs != ast->regs + AST_IO_MM_OFFSET)
+		pci_iounmap(dev->pdev, ast->ioregs);
 	pci_iounmap(dev->pdev, ast->regs);
 	kfree(ast);
 	return 0;
@@ -133,6 +133,7 @@ static int drm_new_set_master(struct drm_device *dev, struct drm_file *fpriv)
 
 	lockdep_assert_held_once(&dev->master_mutex);
 
+	WARN_ON(fpriv->is_master);
 	old_master = fpriv->master;
 	fpriv->master = drm_master_create(dev);
 	if (!fpriv->master) {
@@ -161,6 +162,7 @@ out_err:
 	/* drop references and restore old master on failure */
 	drm_master_put(&fpriv->master);
 	fpriv->master = old_master;
+	fpriv->is_master = 0;
 
 	return ret;
 }
@@ -99,7 +99,7 @@ void mdfldWaitForPipeEnable(struct drm_device *dev, int pipe)
 	/* Wait for for the pipe enable to take effect. */
 	for (count = 0; count < COUNT_MAX; count++) {
 		temp = REG_READ(map->conf);
-		if ((temp & PIPEACONF_PIPE_STATE) == 1)
+		if (temp & PIPEACONF_PIPE_STATE)
 			break;
 	}
 }
@@ -1446,8 +1446,7 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
 	}
 
 	/* The CEC module handles HDMI hotplug detection */
-	cec_np = of_find_compatible_node(np->parent, NULL,
-					 "mediatek,mt8173-cec");
+	cec_np = of_get_compatible_child(np->parent, "mediatek,mt8173-cec");
 	if (!cec_np) {
 		dev_err(dev, "Failed to find CEC node\n");
 		return -EINVAL;
@@ -1457,8 +1456,10 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
 	if (!cec_pdev) {
 		dev_err(hdmi->dev, "Waiting for CEC device %s\n",
 			cec_np->full_name);
+		of_node_put(cec_np);
 		return -EPROBE_DEFER;
 	}
+	of_node_put(cec_np);
 	hdmi->cec_dev = &cec_pdev->dev;
 
 	/*
@@ -710,31 +710,26 @@ enum mlx5_ib_width {
 	MLX5_IB_WIDTH_12X = 1 << 4
 };
 
-static int translate_active_width(struct ib_device *ibdev, u8 active_width,
+static void translate_active_width(struct ib_device *ibdev, u8 active_width,
 				  u8 *ib_width)
 {
 	struct mlx5_ib_dev *dev = to_mdev(ibdev);
-	int err = 0;
 
-	if (active_width & MLX5_IB_WIDTH_1X) {
+	if (active_width & MLX5_IB_WIDTH_1X)
 		*ib_width = IB_WIDTH_1X;
-	} else if (active_width & MLX5_IB_WIDTH_2X) {
-		mlx5_ib_dbg(dev, "active_width %d is not supported by IB spec\n",
-			    (int)active_width);
-		err = -EINVAL;
-	} else if (active_width & MLX5_IB_WIDTH_4X) {
+	else if (active_width & MLX5_IB_WIDTH_4X)
 		*ib_width = IB_WIDTH_4X;
-	} else if (active_width & MLX5_IB_WIDTH_8X) {
+	else if (active_width & MLX5_IB_WIDTH_8X)
 		*ib_width = IB_WIDTH_8X;
-	} else if (active_width & MLX5_IB_WIDTH_12X) {
+	else if (active_width & MLX5_IB_WIDTH_12X)
 		*ib_width = IB_WIDTH_12X;
-	} else {
-		mlx5_ib_dbg(dev, "Invalid active_width %d\n",
+	else {
+		mlx5_ib_dbg(dev, "Invalid active_width %d, setting width to default value: 4x\n",
 			    (int)active_width);
-		err = -EINVAL;
+		*ib_width = IB_WIDTH_4X;
 	}
 
-	return err;
+	return;
 }
 
 static int mlx5_mtu_to_ib_mtu(int mtu)
@@ -842,10 +837,8 @@ static int mlx5_query_hca_port(struct ib_device *ibdev, u8 port,
 	if (err)
 		goto out;
 
-	err = translate_active_width(ibdev, ib_link_width_oper,
-				     &props->active_width);
-	if (err)
-		goto out;
+	translate_active_width(ibdev, ib_link_width_oper, &props->active_width);
+
 	err = mlx5_query_port_ib_proto_oper(mdev, &props->active_speed, port);
 	if (err)
 		goto out;
@@ -1110,7 +1110,9 @@ u8 iser_check_task_pi_status(struct iscsi_iser_task *iser_task,
 				 IB_MR_CHECK_SIG_STATUS, &mr_status);
 	if (ret) {
 		pr_err("ib_check_mr_status failed, ret %d\n", ret);
-		goto err;
+		/* Not a lot we can do, return ambiguous guard error */
+		*sector = 0;
+		return 0x1;
 	}
 
 	if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
@@ -1138,9 +1140,6 @@ u8 iser_check_task_pi_status(struct iscsi_iser_task *iser_task,
 	}
 
 	return 0;
-err:
-	/* Not alot we can do here, return ambiguous guard error */
-	return 0x1;
 }
 
 void iser_err_comp(struct ib_wc *wc, const char *type)
@@ -483,18 +483,18 @@ static const u8 xboxone_hori_init[] = {
 };
 
 /*
- * This packet is required for some of the PDP pads to start
+ * This packet is required for most (all?) of the PDP pads to start
  * sending input reports. These pads include: (0x0e6f:0x02ab),
- * (0x0e6f:0x02a4).
+ * (0x0e6f:0x02a4), (0x0e6f:0x02a6).
  */
 static const u8 xboxone_pdp_init1[] = {
 	0x0a, 0x20, 0x00, 0x03, 0x00, 0x01, 0x14
 };
 
 /*
- * This packet is required for some of the PDP pads to start
+ * This packet is required for most (all?) of the PDP pads to start
  * sending input reports. These pads include: (0x0e6f:0x02ab),
- * (0x0e6f:0x02a4).
+ * (0x0e6f:0x02a4), (0x0e6f:0x02a6).
  */
 static const u8 xboxone_pdp_init2[] = {
 	0x06, 0x20, 0x00, 0x02, 0x01, 0x00
@@ -530,12 +530,8 @@ static const struct xboxone_init_packet xboxone_init_packets[] = {
 	XBOXONE_INIT_PKT(0x0e6f, 0x0165, xboxone_hori_init),
 	XBOXONE_INIT_PKT(0x0f0d, 0x0067, xboxone_hori_init),
 	XBOXONE_INIT_PKT(0x0000, 0x0000, xboxone_fw2015_init),
-	XBOXONE_INIT_PKT(0x0e6f, 0x02ab, xboxone_pdp_init1),
-	XBOXONE_INIT_PKT(0x0e6f, 0x02ab, xboxone_pdp_init2),
-	XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init1),
-	XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init2),
-	XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init1),
-	XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init2),
+	XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_init1),
+	XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_init2),
 	XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init),
 	XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init),
 	XBOXONE_INIT_PKT(0x24c6, 0x543a, xboxone_rumblebegin_init),
@@ -405,7 +405,7 @@ matrix_keypad_parse_dt(struct device *dev)
 	struct matrix_keypad_platform_data *pdata;
 	struct device_node *np = dev->of_node;
 	unsigned int *gpios;
-	int i, nrow, ncol;
+	int ret, i, nrow, ncol;
 
 	if (!np) {
 		dev_err(dev, "device lacks DT data\n");
@@ -447,12 +447,19 @@ matrix_keypad_parse_dt(struct device *dev)
 		return ERR_PTR(-ENOMEM);
 	}
 
-	for (i = 0; i < pdata->num_row_gpios; i++)
-		gpios[i] = of_get_named_gpio(np, "row-gpios", i);
+	for (i = 0; i < nrow; i++) {
+		ret = of_get_named_gpio(np, "row-gpios", i);
+		if (ret < 0)
+			return ERR_PTR(ret);
+		gpios[i] = ret;
+	}
 
-	for (i = 0; i < pdata->num_col_gpios; i++)
-		gpios[pdata->num_row_gpios + i] =
-			of_get_named_gpio(np, "col-gpios", i);
+	for (i = 0; i < ncol; i++) {
+		ret = of_get_named_gpio(np, "col-gpios", i);
+		if (ret < 0)
+			return ERR_PTR(ret);
+		gpios[nrow + i] = ret;
+	}
 
 	pdata->row_gpios = gpios;
 	pdata->col_gpios = &gpios[pdata->num_row_gpios];
@@ -479,10 +486,8 @@ static int matrix_keypad_probe(struct platform_device *pdev)
 	pdata = dev_get_platdata(&pdev->dev);
 	if (!pdata) {
 		pdata = matrix_keypad_parse_dt(&pdev->dev);
-		if (IS_ERR(pdata)) {
-			dev_err(&pdev->dev, "no platform data defined\n");
+		if (IS_ERR(pdata))
 			return PTR_ERR(pdata);
-		}
 	} else if (!pdata->keymap_data) {
 		dev_err(&pdev->dev, "no keymap data defined\n");
 		return -EINVAL;
@@ -1253,6 +1253,9 @@ static const struct acpi_device_id elan_acpi_id[] = {
 	{ "ELAN0618", 0 },
 	{ "ELAN061C", 0 },
 	{ "ELAN061D", 0 },
+	{ "ELAN061E", 0 },
+	{ "ELAN0620", 0 },
+	{ "ELAN0621", 0 },
 	{ "ELAN0622", 0 },
 	{ "ELAN1000", 0 },
 	{ }
@@ -3039,10 +3039,10 @@ static u32 qed_grc_dump_big_ram(struct qed_hwfn *p_hwfn,
 		s_big_ram_defs[big_ram_id].num_of_blocks[dev_data->chip_id];
 	ram_size = total_blocks * BIG_RAM_BLOCK_SIZE_DWORDS;
 
-	strncpy(type_name, s_big_ram_defs[big_ram_id].instance_name,
-		strlen(s_big_ram_defs[big_ram_id].instance_name));
-	strncpy(mem_name, s_big_ram_defs[big_ram_id].instance_name,
-		strlen(s_big_ram_defs[big_ram_id].instance_name));
+	strscpy(type_name, s_big_ram_defs[big_ram_id].instance_name,
+		sizeof(type_name));
+	strscpy(mem_name, s_big_ram_defs[big_ram_id].instance_name,
+		sizeof(mem_name));
 
 	/* Dump memory header */
 	offset += qed_grc_dump_mem_hdr(p_hwfn,
@@ -1302,8 +1302,14 @@ int wmi_set_ie(struct wil6210_priv *wil, u8 type, u16 ie_len, const void *ie)
 	};
 	int rc;
 	u16 len = sizeof(struct wmi_set_appie_cmd) + ie_len;
-	struct wmi_set_appie_cmd *cmd = kzalloc(len, GFP_KERNEL);
+	struct wmi_set_appie_cmd *cmd;
 
+	if (len < ie_len) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+	cmd = kzalloc(len, GFP_KERNEL);
 	if (!cmd) {
 		rc = -ENOMEM;
 		goto out;
@@ -135,11 +135,16 @@ EXPORT_SYMBOL_GPL(devm_reset_controller_register);
  * @rstc: reset controller
  *
  * Calling this on a shared reset controller is an error.
+ *
+ * If rstc is NULL it is an optional reset and the function will just
+ * return 0.
  */
 int reset_control_reset(struct reset_control *rstc)
 {
-	if (WARN_ON(IS_ERR_OR_NULL(rstc)) ||
-	    WARN_ON(rstc->shared))
+	if (!rstc)
+		return 0;
+
+	if (WARN_ON(IS_ERR(rstc)))
 		return -EINVAL;
 
 	if (rstc->rcdev->ops->reset)
@@ -159,10 +164,16 @@ EXPORT_SYMBOL_GPL(reset_control_reset);
  *
  * For shared reset controls a driver cannot expect the hw's registers and
  * internal state to be reset, but must be prepared for this to happen.
+ *
+ * If rstc is NULL it is an optional reset and the function will just
+ * return 0.
  */
 int reset_control_assert(struct reset_control *rstc)
 {
-	if (WARN_ON(IS_ERR_OR_NULL(rstc)))
+	if (!rstc)
+		return 0;
+
+	if (WARN_ON(IS_ERR(rstc)))
 		return -EINVAL;
 
 	if (!rstc->rcdev->ops->assert)
@@ -185,10 +196,16 @@ EXPORT_SYMBOL_GPL(reset_control_assert);
  * @rstc: reset controller
  *
  * After calling this function, the reset is guaranteed to be deasserted.
+ *
+ * If rstc is NULL it is an optional reset and the function will just
+ * return 0.
  */
 int reset_control_deassert(struct reset_control *rstc)
 {
-	if (WARN_ON(IS_ERR_OR_NULL(rstc)))
+	if (!rstc)
+		return 0;
+
+	if (WARN_ON(IS_ERR(rstc)))
 		return -EINVAL;
 
 	if (!rstc->rcdev->ops->deassert)
@@ -206,12 +223,15 @@ EXPORT_SYMBOL_GPL(reset_control_deassert);
 /**
  * reset_control_status - returns a negative errno if not supported, a
  * positive value if the reset line is asserted, or zero if the reset
- * line is not asserted.
+ * line is not asserted or if the desc is NULL (optional reset).
  * @rstc: reset controller
  */
 int reset_control_status(struct reset_control *rstc)
 {
-	if (WARN_ON(IS_ERR_OR_NULL(rstc)))
+	if (!rstc)
+		return 0;
+
+	if (WARN_ON(IS_ERR(rstc)))
 		return -EINVAL;
 
 	if (rstc->rcdev->ops->status)
@@ -221,7 +241,7 @@ int reset_control_status(struct reset_control *rstc)
 }
 EXPORT_SYMBOL_GPL(reset_control_status);
 
-static struct reset_control *__reset_control_get(
+static struct reset_control *__reset_control_get_internal(
 				struct reset_controller_dev *rcdev,
 				unsigned int index, int shared)
 {
@@ -254,7 +274,7 @@ static struct reset_control *__reset_control_get(
 	return rstc;
 }
 
-static void __reset_control_put(struct reset_control *rstc)
+static void __reset_control_put_internal(struct reset_control *rstc)
 {
 	lockdep_assert_held(&reset_list_mutex);
 
@@ -268,7 +288,8 @@ static void __reset_control_put(struct reset_control *rstc)
 }
 
 struct reset_control *__of_reset_control_get(struct device_node *node,
-				     const char *id, int index, int shared)
+				     const char *id, int index, bool shared,
+				     bool optional)
 {
 	struct reset_control *rstc;
 	struct reset_controller_dev *r, *rcdev;
@@ -282,14 +303,18 @@ struct reset_control *__of_reset_control_get(struct device_node *node,
 	if (id) {
 		index = of_property_match_string(node,
 						 "reset-names", id);
+		if (index == -EILSEQ)
+			return ERR_PTR(index);
 		if (index < 0)
-			return ERR_PTR(-ENOENT);
+			return optional ? NULL : ERR_PTR(-ENOENT);
 	}
 
 	ret = of_parse_phandle_with_args(node, "resets", "#reset-cells",
 					 index, &args);
-	if (ret)
+	if (ret == -EINVAL)
 		return ERR_PTR(ret);
+	if (ret)
+		return optional ? NULL : ERR_PTR(ret);
 
 	mutex_lock(&reset_list_mutex);
 	rcdev = NULL;
@@ -318,7 +343,7 @@ struct reset_control *__of_reset_control_get(struct device_node *node,
 	}
 
 	/* reset_list_mutex also protects the rcdev's reset_control list */
-	rstc = __reset_control_get(rcdev, rstc_id, shared);
+	rstc = __reset_control_get_internal(rcdev, rstc_id, shared);
 
 	mutex_unlock(&reset_list_mutex);
 
@@ -326,6 +351,17 @@ struct reset_control *__of_reset_control_get(struct device_node *node,
 }
 EXPORT_SYMBOL_GPL(__of_reset_control_get);
 
+struct reset_control *__reset_control_get(struct device *dev, const char *id,
+					  int index, bool shared, bool optional)
+{
+	if (dev->of_node)
+		return __of_reset_control_get(dev->of_node, id, index, shared,
+					      optional);
+
+	return optional ? NULL : ERR_PTR(-EINVAL);
+}
+EXPORT_SYMBOL_GPL(__reset_control_get);
+
 /**
  * reset_control_put - free the reset controller
  * @rstc: reset controller
@@ -333,11 +369,11 @@ EXPORT_SYMBOL_GPL(__of_reset_control_get);
 
 void reset_control_put(struct reset_control *rstc)
 {
-	if (IS_ERR(rstc))
+	if (IS_ERR_OR_NULL(rstc))
 		return;
 
 	mutex_lock(&reset_list_mutex);
-	__reset_control_put(rstc);
+	__reset_control_put_internal(rstc);
 	mutex_unlock(&reset_list_mutex);
 }
 EXPORT_SYMBOL_GPL(reset_control_put);
@@ -348,7 +384,8 @@ static void devm_reset_control_release(struct device *dev, void *res)
 }
 
 struct reset_control *__devm_reset_control_get(struct device *dev,
-				     const char *id, int index, int shared)
+				     const char *id, int index, bool shared,
+				     bool optional)
 {
 	struct reset_control **ptr, *rstc;
 
@@ -357,8 +394,7 @@ struct reset_control *__devm_reset_control_get(struct device *dev,
 	if (!ptr)
 		return ERR_PTR(-ENOMEM);
 
-	rstc = __of_reset_control_get(dev ? dev->of_node : NULL,
-				      id, index, shared);
+	rstc = __reset_control_get(dev, id, index, shared, optional);
 	if (!IS_ERR(rstc)) {
 		*ptr = rstc;
 		devres_add(dev, ptr);
@@ -374,17 +410,18 @@ EXPORT_SYMBOL_GPL(__devm_reset_control_get);
 * device_reset - find reset controller associated with the device
 *                and perform reset
 * @dev: device to be reset by the controller
+ * @optional: whether it is optional to reset the device
 *
- * Convenience wrapper for reset_control_get() and reset_control_reset().
+ * Convenience wrapper for __reset_control_get() and reset_control_reset().
 * This is useful for the common case of devices with single, dedicated reset
 * lines.
 */
-int device_reset(struct device *dev)
+int __device_reset(struct device *dev, bool optional)
 {
 	struct reset_control *rstc;
 	int ret;
 
-	rstc = reset_control_get(dev, NULL);
+	rstc = __reset_control_get(dev, NULL, 0, 0, optional);
 	if (IS_ERR(rstc))
 		return PTR_ERR(rstc);
@@ -394,4 +431,4 @@ int device_reset(struct device *dev)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(device_reset);
+EXPORT_SYMBOL_GPL(__device_reset);
@@ -1250,8 +1250,8 @@ fc_rspnid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u16 ox_id,
 	memset(rspnid, 0, sizeof(struct fcgs_rspnid_req_s));
 
 	rspnid->dap = s_id;
-	rspnid->spn_len = (u8) strlen((char *)name);
-	strncpy((char *)rspnid->spn, (char *)name, rspnid->spn_len);
+	strlcpy(rspnid->spn, name, sizeof(rspnid->spn));
+	rspnid->spn_len = (u8) strlen(rspnid->spn);
 
 	return sizeof(struct fcgs_rspnid_req_s) + sizeof(struct ct_hdr_s);
 }
@@ -1271,8 +1271,8 @@ fc_rsnn_nn_build(struct fchs_s *fchs, void *pyld, u32 s_id,
 	memset(rsnn_nn, 0, sizeof(struct fcgs_rsnn_nn_req_s));
 
 	rsnn_nn->node_name = node_name;
-	rsnn_nn->snn_len = (u8) strlen((char *)name);
-	strncpy((char *)rsnn_nn->snn, (char *)name, rsnn_nn->snn_len);
+	strlcpy(rsnn_nn->snn, name, sizeof(rsnn_nn->snn));
+	rsnn_nn->snn_len = (u8) strlen(rsnn_nn->snn);
 
 	return sizeof(struct fcgs_rsnn_nn_req_s) + sizeof(struct ct_hdr_s);
 }
@@ -832,23 +832,23 @@ bfa_fcs_fabric_psymb_init(struct bfa_fcs_fabric_s *fabric)
 	bfa_ioc_get_adapter_model(&fabric->fcs->bfa->ioc, model);
 
 	/* Model name/number */
-	strncpy((char *)&port_cfg->sym_name, model,
-		BFA_FCS_PORT_SYMBNAME_MODEL_SZ);
-	strncat((char *)&port_cfg->sym_name, BFA_FCS_PORT_SYMBNAME_SEPARATOR,
-		sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR));
+	strlcpy(port_cfg->sym_name.symname, model,
+		BFA_SYMNAME_MAXLEN);
+	strlcat(port_cfg->sym_name.symname, BFA_FCS_PORT_SYMBNAME_SEPARATOR,
+		BFA_SYMNAME_MAXLEN);
 
 	/* Driver Version */
-	strncat((char *)&port_cfg->sym_name, (char *)driver_info->version,
-		BFA_FCS_PORT_SYMBNAME_VERSION_SZ);
-	strncat((char *)&port_cfg->sym_name, BFA_FCS_PORT_SYMBNAME_SEPARATOR,
-		sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR));
+	strlcat(port_cfg->sym_name.symname, driver_info->version,
+		BFA_SYMNAME_MAXLEN);
+	strlcat(port_cfg->sym_name.symname, BFA_FCS_PORT_SYMBNAME_SEPARATOR,
+		BFA_SYMNAME_MAXLEN);
 
 	/* Host machine name */
-	strncat((char *)&port_cfg->sym_name,
-		(char *)driver_info->host_machine_name,
-		BFA_FCS_PORT_SYMBNAME_MACHINENAME_SZ);
-	strncat((char *)&port_cfg->sym_name, BFA_FCS_PORT_SYMBNAME_SEPARATOR,
-		sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR));
+	strlcat(port_cfg->sym_name.symname,
+		driver_info->host_machine_name,
+		BFA_SYMNAME_MAXLEN);
+	strlcat(port_cfg->sym_name.symname, BFA_FCS_PORT_SYMBNAME_SEPARATOR,
+		BFA_SYMNAME_MAXLEN);
 
 	/*
 	 * Host OS Info :
@@ -856,24 +856,24 @@ bfa_fcs_fabric_psymb_init(struct bfa_fcs_fabric_s *fabric)
 	 * OS name string and instead copy the entire OS info string (64 bytes).
 	 */
 	if (driver_info->host_os_patch[0] == '\0') {
-		strncat((char *)&port_cfg->sym_name,
-			(char *)driver_info->host_os_name,
-			BFA_FCS_OS_STR_LEN);
-		strncat((char *)&port_cfg->sym_name,
+		strlcat(port_cfg->sym_name.symname,
+			driver_info->host_os_name,
+			BFA_SYMNAME_MAXLEN);
+		strlcat(port_cfg->sym_name.symname,
 			BFA_FCS_PORT_SYMBNAME_SEPARATOR,
-			sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR));
+			BFA_SYMNAME_MAXLEN);
 	} else {
-		strncat((char *)&port_cfg->sym_name,
-			(char *)driver_info->host_os_name,
-			BFA_FCS_PORT_SYMBNAME_OSINFO_SZ);
-		strncat((char *)&port_cfg->sym_name,
+		strlcat(port_cfg->sym_name.symname,
+			driver_info->host_os_name,
+			BFA_SYMNAME_MAXLEN);
+		strlcat(port_cfg->sym_name.symname,
 			BFA_FCS_PORT_SYMBNAME_SEPARATOR,
-			sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR));
+			BFA_SYMNAME_MAXLEN);
 
 		/* Append host OS Patch Info */
-		strncat((char *)&port_cfg->sym_name,
-			(char *)driver_info->host_os_patch,
-			BFA_FCS_PORT_SYMBNAME_OSPATCH_SZ);
+		strlcat(port_cfg->sym_name.symname,
+			driver_info->host_os_patch,
+			BFA_SYMNAME_MAXLEN);
 	}
 
 	/* null terminate */
@@ -893,26 +893,26 @@ bfa_fcs_fabric_nsymb_init(struct bfa_fcs_fabric_s *fabric)
 	bfa_ioc_get_adapter_model(&fabric->fcs->bfa->ioc, model);
 
 	/* Model name/number */
-	strncpy((char *)&port_cfg->node_sym_name, model,
-		BFA_FCS_PORT_SYMBNAME_MODEL_SZ);
-	strncat((char *)&port_cfg->node_sym_name,
+	strlcpy(port_cfg->node_sym_name.symname, model,
+		BFA_SYMNAME_MAXLEN);
+	strlcat(port_cfg->node_sym_name.symname,
 		BFA_FCS_PORT_SYMBNAME_SEPARATOR,
-		sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR));
+		BFA_SYMNAME_MAXLEN);
 
 	/* Driver Version */
-	strncat((char *)&port_cfg->node_sym_name, (char *)driver_info->version,
-		BFA_FCS_PORT_SYMBNAME_VERSION_SZ);
-	strncat((char *)&port_cfg->node_sym_name,
+	strlcat(port_cfg->node_sym_name.symname, (char *)driver_info->version,
+		BFA_SYMNAME_MAXLEN);
+	strlcat(port_cfg->node_sym_name.symname,
 		BFA_FCS_PORT_SYMBNAME_SEPARATOR,
-		sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR));
+		BFA_SYMNAME_MAXLEN);
 
 	/* Host machine name */
-	strncat((char *)&port_cfg->node_sym_name,
-		(char *)driver_info->host_machine_name,
-		BFA_FCS_PORT_SYMBNAME_MACHINENAME_SZ);
-	strncat((char *)&port_cfg->node_sym_name,
+	strlcat(port_cfg->node_sym_name.symname,
+		driver_info->host_machine_name,
+		BFA_SYMNAME_MAXLEN);
+	strlcat(port_cfg->node_sym_name.symname,
 		BFA_FCS_PORT_SYMBNAME_SEPARATOR,
-		sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR));
+		BFA_SYMNAME_MAXLEN);
 
 	/* null terminate */
 	port_cfg->node_sym_name.symname[BFA_SYMNAME_MAXLEN - 1] = 0;
@@ -2631,10 +2631,10 @@ bfa_fcs_fdmi_get_hbaattr(struct bfa_fcs_lport_fdmi_s *fdmi,
 	bfa_ioc_get_adapter_fw_ver(&port->fcs->bfa->ioc,
 					hba_attr->fw_version);
 
-	strncpy(hba_attr->driver_version, (char *)driver_info->version,
+	strlcpy(hba_attr->driver_version, (char *)driver_info->version,
 		sizeof(hba_attr->driver_version));
 
-	strncpy(hba_attr->os_name, driver_info->host_os_name,
+	strlcpy(hba_attr->os_name, driver_info->host_os_name,
 		sizeof(hba_attr->os_name));
 
 	/*
@@ -2642,23 +2642,23 @@ bfa_fcs_fdmi_get_hbaattr(struct bfa_fcs_lport_fdmi_s *fdmi,
 	 * to the os name along with a separator
 	 */
 	if (driver_info->host_os_patch[0] != '\0') {
-		strncat(hba_attr->os_name, BFA_FCS_PORT_SYMBNAME_SEPARATOR,
-			sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR));
-		strncat(hba_attr->os_name, driver_info->host_os_patch,
-			sizeof(driver_info->host_os_patch));
+		strlcat(hba_attr->os_name, BFA_FCS_PORT_SYMBNAME_SEPARATOR,
+			sizeof(hba_attr->os_name));
+		strlcat(hba_attr->os_name, driver_info->host_os_patch,
+			sizeof(hba_attr->os_name));
 	}
 
 	/* Retrieve the max frame size from the port attr */
 	bfa_fcs_fdmi_get_portattr(fdmi, &fcs_port_attr);
 	hba_attr->max_ct_pyld = fcs_port_attr.max_frm_size;
 
-	strncpy(hba_attr->node_sym_name.symname,
+	strlcpy(hba_attr->node_sym_name.symname,
 		port->port_cfg.node_sym_name.symname, BFA_SYMNAME_MAXLEN);
 	strcpy(hba_attr->vendor_info, "QLogic");
 	hba_attr->num_ports =
 		cpu_to_be32(bfa_ioc_get_nports(&port->fcs->bfa->ioc));
 	hba_attr->fabric_name = port->fabric->lps->pr_nwwn;
-	strncpy(hba_attr->bios_ver, hba_attr->option_rom_ver, BFA_VERSION_LEN);
+	strlcpy(hba_attr->bios_ver, hba_attr->option_rom_ver, BFA_VERSION_LEN);
 
 }
 
@@ -2725,20 +2725,20 @@ bfa_fcs_fdmi_get_portattr(struct bfa_fcs_lport_fdmi_s *fdmi,
 	/*
 	 * OS device Name
 	 */
-	strncpy(port_attr->os_device_name, (char *)driver_info->os_device_name,
+	strlcpy(port_attr->os_device_name, driver_info->os_device_name,
 		sizeof(port_attr->os_device_name));
 
 	/*
 	 * Host name
 	 */
-	strncpy(port_attr->host_name, (char *)driver_info->host_machine_name,
+	strlcpy(port_attr->host_name, driver_info->host_machine_name,
 		sizeof(port_attr->host_name));
 
 	port_attr->node_name = bfa_fcs_lport_get_nwwn(port);
 	port_attr->port_name = bfa_fcs_lport_get_pwwn(port);
 
-	strncpy(port_attr->port_sym_name.symname,
-		(char *)&bfa_fcs_lport_get_psym_name(port), BFA_SYMNAME_MAXLEN);
+	strlcpy(port_attr->port_sym_name.symname,
+		bfa_fcs_lport_get_psym_name(port).symname, BFA_SYMNAME_MAXLEN);
 	bfa_fcs_lport_get_attr(port, &lport_attr);
 	port_attr->port_type = cpu_to_be32(lport_attr.port_type);
 	port_attr->scos = pport_attr.cos_supported;
@@ -3218,7 +3218,7 @@ bfa_fcs_lport_ms_gmal_response(void *fcsarg, struct bfa_fcxp_s *fcxp,
 			rsp_str[gmal_entry->len-1] = 0;
 
 			/* copy IP Address to fabric */
-			strncpy(bfa_fcs_lport_get_fabric_ipaddr(port),
+			strlcpy(bfa_fcs_lport_get_fabric_ipaddr(port),
 				gmal_entry->ip_addr,
 				BFA_FCS_FABRIC_IPADDR_SZ);
 			break;
@@ -4656,21 +4656,13 @@ bfa_fcs_lport_ns_send_rspn_id(void *ns_cbarg, struct bfa_fcxp_s *fcxp_alloced)
 		 * to that of the base port.
 		 */
 
-		strncpy((char *)psymbl,
-			(char *) &
-			(bfa_fcs_lport_get_psym_name
+		strlcpy(symbl,
+			(char *)&(bfa_fcs_lport_get_psym_name
 			 (bfa_fcs_get_base_port(port->fcs))),
-			strlen((char *) &
-			       bfa_fcs_lport_get_psym_name(bfa_fcs_get_base_port
-							   (port->fcs))));
-
-		/* Ensure we have a null terminating string. */
-		((char *)psymbl)[strlen((char *) &
-			bfa_fcs_lport_get_psym_name(bfa_fcs_get_base_port
-						    (port->fcs)))] = 0;
-		strncat((char *)psymbl,
-			(char *) &(bfa_fcs_lport_get_psym_name(port)),
-			strlen((char *) &bfa_fcs_lport_get_psym_name(port)));
+			sizeof(symbl));
+
+		strlcat(symbl, (char *)&(bfa_fcs_lport_get_psym_name(port)),
+			sizeof(symbl));
 	} else {
 		psymbl = (u8 *) &(bfa_fcs_lport_get_psym_name(port));
 	}
@@ -5162,7 +5154,6 @@ bfa_fcs_lport_ns_util_send_rspn_id(void *cbarg, struct bfa_fcxp_s *fcxp_alloced)
 	struct fchs_s fchs;
 	struct bfa_fcxp_s *fcxp;
 	u8 symbl[256];
-	u8 *psymbl = &symbl[0];
 	int len;
 
 	/* Avoid sending RSPN in the following states. */
@@ -5192,22 +5183,17 @@ bfa_fcs_lport_ns_util_send_rspn_id(void *cbarg, struct bfa_fcxp_s *fcxp_alloced)
 		 * For Vports, we append the vport's port symbolic name
 		 * to that of the base port.
 		 */
-		strncpy((char *)psymbl, (char *)&(bfa_fcs_lport_get_psym_name
+		strlcpy(symbl, (char *)&(bfa_fcs_lport_get_psym_name
 			(bfa_fcs_get_base_port(port->fcs))),
-			strlen((char *)&bfa_fcs_lport_get_psym_name(
-			bfa_fcs_get_base_port(port->fcs))));
-
-		/* Ensure we have a null terminating string. */
-		((char *)psymbl)[strlen((char *)&bfa_fcs_lport_get_psym_name(
-			bfa_fcs_get_base_port(port->fcs)))] = 0;
+			sizeof(symbl));
 
-		strncat((char *)psymbl,
+		strlcat(symbl,
 			(char *)&(bfa_fcs_lport_get_psym_name(port)),
-			strlen((char *)&bfa_fcs_lport_get_psym_name(port)));
+			sizeof(symbl));
 	}
 
 	len = fc_rspnid_build(&fchs, bfa_fcxp_get_reqbuf(fcxp),
-			      bfa_fcs_lport_get_fcid(port), 0, psymbl);
+			      bfa_fcs_lport_get_fcid(port), 0, symbl);
 
 	bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE,
 		      FC_CLASS_3, len, &fchs, NULL, NULL, FC_MAX_PDUSZ, 0);
 
@@ -2803,7 +2803,7 @@ void
 bfa_ioc_get_adapter_manufacturer(struct bfa_ioc_s *ioc, char *manufacturer)
 {
 	memset((void *)manufacturer, 0, BFA_ADAPTER_MFG_NAME_LEN);
-	strncpy(manufacturer, BFA_MFG_NAME, BFA_ADAPTER_MFG_NAME_LEN);
+	strlcpy(manufacturer, BFA_MFG_NAME, BFA_ADAPTER_MFG_NAME_LEN);
 }
 
 void
@@ -366,8 +366,8 @@ bfa_plog_str(struct bfa_plog_s *plog, enum bfa_plog_mid mid,
 		lp.eid = event;
 		lp.log_type = BFA_PL_LOG_TYPE_STRING;
 		lp.misc = misc;
-		strncpy(lp.log_entry.string_log, log_str,
-			BFA_PL_STRING_LOG_SZ - 1);
+		strlcpy(lp.log_entry.string_log, log_str,
+			BFA_PL_STRING_LOG_SZ);
 		lp.log_entry.string_log[BFA_PL_STRING_LOG_SZ - 1] = '\0';
 		bfa_plog_add(plog, &lp);
 	}
@@ -983,20 +983,20 @@ bfad_start_ops(struct bfad_s *bfad) {
 
 	/* Fill the driver_info info to fcs*/
 	memset(&driver_info, 0, sizeof(driver_info));
-	strncpy(driver_info.version, BFAD_DRIVER_VERSION,
-		sizeof(driver_info.version) - 1);
+	strlcpy(driver_info.version, BFAD_DRIVER_VERSION,
+		sizeof(driver_info.version));
 	if (host_name)
-		strncpy(driver_info.host_machine_name, host_name,
-			sizeof(driver_info.host_machine_name) - 1);
+		strlcpy(driver_info.host_machine_name, host_name,
+			sizeof(driver_info.host_machine_name));
 	if (os_name)
-		strncpy(driver_info.host_os_name, os_name,
-			sizeof(driver_info.host_os_name) - 1);
+		strlcpy(driver_info.host_os_name, os_name,
+			sizeof(driver_info.host_os_name));
 	if (os_patch)
-		strncpy(driver_info.host_os_patch, os_patch,
-			sizeof(driver_info.host_os_patch) - 1);
+		strlcpy(driver_info.host_os_patch, os_patch,
+			sizeof(driver_info.host_os_patch));
 
-	strncpy(driver_info.os_device_name, bfad->pci_name,
-		sizeof(driver_info.os_device_name) - 1);
+	strlcpy(driver_info.os_device_name, bfad->pci_name,
+		sizeof(driver_info.os_device_name));
 
 	/* FCS driver info init */
 	spin_lock_irqsave(&bfad->bfad_lock, flags);
@@ -843,7 +843,7 @@ bfad_im_symbolic_name_show(struct device *dev, struct device_attribute *attr,
 	char symname[BFA_SYMNAME_MAXLEN];
 
 	bfa_fcs_lport_get_attr(&bfad->bfa_fcs.fabric.bport, &port_attr);
-	strncpy(symname, port_attr.port_cfg.sym_name.symname,
+	strlcpy(symname, port_attr.port_cfg.sym_name.symname,
 		BFA_SYMNAME_MAXLEN);
 	return snprintf(buf, PAGE_SIZE, "%s\n", symname);
 }
@@ -127,7 +127,7 @@ bfad_iocmd_ioc_get_attr(struct bfad_s *bfad, void *cmd)
 
 	/* fill in driver attr info */
 	strcpy(iocmd->ioc_attr.driver_attr.driver, BFAD_DRIVER_NAME);
-	strncpy(iocmd->ioc_attr.driver_attr.driver_ver,
+	strlcpy(iocmd->ioc_attr.driver_attr.driver_ver,
 		BFAD_DRIVER_VERSION, BFA_VERSION_LEN);
 	strcpy(iocmd->ioc_attr.driver_attr.fw_ver,
 		iocmd->ioc_attr.adapter_attr.fw_ver);
@@ -315,9 +315,9 @@ bfad_iocmd_port_get_attr(struct bfad_s *bfad, void *cmd)
 	iocmd->attr.port_type = port_attr.port_type;
 	iocmd->attr.loopback = port_attr.loopback;
 	iocmd->attr.authfail = port_attr.authfail;
-	strncpy(iocmd->attr.port_symname.symname,
+	strlcpy(iocmd->attr.port_symname.symname,
 		port_attr.port_cfg.sym_name.symname,
-		sizeof(port_attr.port_cfg.sym_name.symname));
+		sizeof(iocmd->attr.port_symname.symname));
 
 	iocmd->status = BFA_STATUS_OK;
 	return 0;
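
The bfa hunks above all follow one pattern: strncpy()/strncat() plus hand-rolled NUL termination are replaced by strlcpy()/strlcat() sized by the destination buffer. A minimal kernel-style sketch of that difference follows; the function and buffer names are hypothetical, not driver code.

#include <linux/string.h>

/* Illustrative only: dst is a fixed-size buffer, src may be longer than it. */
static void copy_symbolic_name(char *dst, size_t dst_len, const char *src)
{
	/*
	 * strncpy() does not NUL-terminate when strlen(src) >= dst_len, so the
	 * old code appended the terminator by hand.  strlcpy() always
	 * terminates and truncates safely against the destination size.
	 */
	strlcpy(dst, src, dst_len);

	/* Any appended text is bounded by the same total size. */
	strlcat(dst, " (vport)", dst_len);
}
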
@@ -238,14 +238,23 @@ csio_osname(uint8_t *buf, size_t buf_len)
 }
 
 static inline void
-csio_append_attrib(uint8_t **ptr, uint16_t type, uint8_t *val, uint16_t len)
+csio_append_attrib(uint8_t **ptr, uint16_t type, void *val, size_t val_len)
 {
+	uint16_t len;
 	struct fc_fdmi_attr_entry *ae = (struct fc_fdmi_attr_entry *)*ptr;
+
+	if (WARN_ON(val_len > U16_MAX))
+		return;
+
+	len = val_len;
+
 	ae->type = htons(type);
 	len += 4;		/* includes attribute type and length */
 	len = (len + 3) & ~3;	/* should be multiple of 4 bytes */
 	ae->len = htons(len);
-	memcpy(ae->value, val, len);
+	memcpy(ae->value, val, val_len);
+	if (len > val_len)
+		memset(ae->value + val_len, 0, len - val_len);
 	*ptr += len;
 }
 
@@ -335,7 +344,7 @@ csio_ln_fdmi_rhba_cbfn(struct csio_hw *hw, struct csio_ioreq *fdmi_req)
 	numattrs++;
 	val = htonl(FC_PORTSPEED_1GBIT | FC_PORTSPEED_10GBIT);
 	csio_append_attrib(&pld, FC_FDMI_PORT_ATTR_SUPPORTEDSPEED,
-			   (uint8_t *)&val,
+			   &val,
 			   FC_FDMI_PORT_ATTR_SUPPORTEDSPEED_LEN);
 	numattrs++;
 
@@ -346,23 +355,22 @@ csio_ln_fdmi_rhba_cbfn(struct csio_hw *hw, struct csio_ioreq *fdmi_req)
 	else
 		val = htonl(CSIO_HBA_PORTSPEED_UNKNOWN);
 	csio_append_attrib(&pld, FC_FDMI_PORT_ATTR_CURRENTPORTSPEED,
-			   (uint8_t *)&val,
-			   FC_FDMI_PORT_ATTR_CURRENTPORTSPEED_LEN);
+			   &val, FC_FDMI_PORT_ATTR_CURRENTPORTSPEED_LEN);
 	numattrs++;
 
 	mfs = ln->ln_sparm.csp.sp_bb_data;
 	csio_append_attrib(&pld, FC_FDMI_PORT_ATTR_MAXFRAMESIZE,
-			   (uint8_t *)&mfs, FC_FDMI_PORT_ATTR_MAXFRAMESIZE_LEN);
+			   &mfs, sizeof(mfs));
 	numattrs++;
 
 	strcpy(buf, "csiostor");
 	csio_append_attrib(&pld, FC_FDMI_PORT_ATTR_OSDEVICENAME, buf,
-			   (uint16_t)strlen(buf));
+			   strlen(buf));
 	numattrs++;
 
 	if (!csio_hostname(buf, sizeof(buf))) {
 		csio_append_attrib(&pld, FC_FDMI_PORT_ATTR_HOSTNAME,
-				   buf, (uint16_t)strlen(buf));
+				   buf, strlen(buf));
 		numattrs++;
 	}
 	attrib_blk->numattrs = htonl(numattrs);
@@ -444,33 +452,32 @@ csio_ln_fdmi_dprt_cbfn(struct csio_hw *hw, struct csio_ioreq *fdmi_req)
 
 	strcpy(buf, "Chelsio Communications");
 	csio_append_attrib(&pld, FC_FDMI_HBA_ATTR_MANUFACTURER, buf,
-			   (uint16_t)strlen(buf));
+			   strlen(buf));
 	numattrs++;
 	csio_append_attrib(&pld, FC_FDMI_HBA_ATTR_SERIALNUMBER,
-			   hw->vpd.sn, (uint16_t)sizeof(hw->vpd.sn));
+			   hw->vpd.sn, sizeof(hw->vpd.sn));
 	numattrs++;
 	csio_append_attrib(&pld, FC_FDMI_HBA_ATTR_MODEL, hw->vpd.id,
-			   (uint16_t)sizeof(hw->vpd.id));
+			   sizeof(hw->vpd.id));
 	numattrs++;
 	csio_append_attrib(&pld, FC_FDMI_HBA_ATTR_MODELDESCRIPTION,
-			   hw->model_desc, (uint16_t)strlen(hw->model_desc));
+			   hw->model_desc, strlen(hw->model_desc));
 	numattrs++;
 	csio_append_attrib(&pld, FC_FDMI_HBA_ATTR_HARDWAREVERSION,
-			   hw->hw_ver, (uint16_t)sizeof(hw->hw_ver));
+			   hw->hw_ver, sizeof(hw->hw_ver));
 	numattrs++;
 	csio_append_attrib(&pld, FC_FDMI_HBA_ATTR_FIRMWAREVERSION,
-			   hw->fwrev_str, (uint16_t)strlen(hw->fwrev_str));
+			   hw->fwrev_str, strlen(hw->fwrev_str));
 	numattrs++;
 
 	if (!csio_osname(buf, sizeof(buf))) {
 		csio_append_attrib(&pld, FC_FDMI_HBA_ATTR_OSNAMEVERSION,
-				   buf, (uint16_t)strlen(buf));
+				   buf, strlen(buf));
 		numattrs++;
 	}
 
 	csio_append_attrib(&pld, FC_FDMI_HBA_ATTR_MAXCTPAYLOAD,
-			   (uint8_t *)&maxpayload,
-			   FC_FDMI_HBA_ATTR_MAXCTPAYLOAD_LEN);
+			   &maxpayload, FC_FDMI_HBA_ATTR_MAXCTPAYLOAD_LEN);
 	len = (uint32_t)(pld - (uint8_t *)cmd);
 	numattrs++;
 	attrib_blk->numattrs = htonl(numattrs);
@@ -1794,6 +1801,8 @@ csio_ln_mgmt_submit_req(struct csio_ioreq *io_req,
 	struct csio_mgmtm *mgmtm = csio_hw_to_mgmtm(hw);
 	int rv;
 
+	BUG_ON(pld_len > pld->len);
+
 	io_req->io_cbfn = io_cbfn;	/* Upper layer callback handler */
 	io_req->fw_handle = (uintptr_t) (io_req);
 	io_req->eq_idx = mgmtm->eq_idx;
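
The reworked csio_append_attrib() above rounds each FDMI attribute up to a 4-byte multiple and zero-fills the tail instead of copying past the caller's value. A rough stand-alone sketch of just that length arithmetic, with hypothetical names, is shown here.

#include <stdint.h>
#include <string.h>

/* Illustrative only: round a value up to a 4-byte-aligned payload. */
static size_t append_padded(uint8_t *out, const void *val, size_t val_len)
{
	size_t padded = (val_len + 3) & ~(size_t)3;	/* multiple of 4 bytes */

	memcpy(out, val, val_len);			/* caller's bytes only */
	if (padded > val_len)
		memset(out + val_len, 0, padded - val_len);	/* zero the tail */
	return padded;
}
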
@@ -33,7 +33,6 @@ struct scsi_dev_info_list_table {
 };
 
 
-static const char spaces[] = "                "; /* 16 of them */
 static unsigned scsi_default_dev_flags;
 static LIST_HEAD(scsi_dev_info_list);
 static char scsi_dev_flags[256];
@@ -298,20 +297,13 @@ static void scsi_strcpy_devinfo(char *name, char *to, size_t to_length,
 	size_t from_length;
 
 	from_length = strlen(from);
-	strncpy(to, from, min(to_length, from_length));
-	if (from_length < to_length) {
-		if (compatible) {
-			/*
-			 * NUL terminate the string if it is short.
-			 */
-			to[from_length] = '\0';
-		} else {
-			/*
-			 * space pad the string if it is short.
-			 */
-			strncpy(&to[from_length], spaces,
-				to_length - from_length);
-		}
+	/* this zero-pads the destination */
+	strncpy(to, from, to_length);
+	if (from_length < to_length && !compatible) {
+		/*
+		 * space pad the string if it is short.
+		 */
+		memset(&to[from_length], ' ', to_length - from_length);
 	}
 	if (from_length > to_length)
 		printk(KERN_WARNING "%s: %s string '%s' is too long\n",
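
The effect of the new scsi_strcpy_devinfo() logic is that short strings come out zero-padded by strncpy() and then space-padded in the non-compatible case. A small runnable sketch of that behaviour, using made-up buffer sizes rather than the driver's types:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char to[8];
	const char *from = "ABC";
	size_t from_len = strlen(from);

	strncpy(to, from, sizeof(to));		/* copies "ABC", zero-fills rest */
	if (from_len < sizeof(to))		/* non-compatible case: space pad */
		memset(&to[from_len], ' ', sizeof(to) - from_len);

	printf("[%.*s]\n", (int)sizeof(to), to);	/* prints "[ABC     ]" */
	return 0;
}
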
@@ -4110,12 +4110,6 @@ RTY_SEND_CMD:
 					rtsx_trace(chip);
 					return STATUS_FAIL;
 				}
 
-			} else if (rsp_type == SD_RSP_TYPE_R0) {
-				if ((ptr[3] & 0x1E) != 0x03) {
-					rtsx_trace(chip);
-					return STATUS_FAIL;
-				}
 			}
 		}
 	}
@@ -131,24 +131,6 @@ static void kgdboc_unregister_kbd(void)
 #define kgdboc_restore_input()
 #endif /* ! CONFIG_KDB_KEYBOARD */
 
-static int kgdboc_option_setup(char *opt)
-{
-	if (!opt) {
-		pr_err("kgdboc: config string not provided\n");
-		return -EINVAL;
-	}
-
-	if (strlen(opt) >= MAX_CONFIG_LEN) {
-		printk(KERN_ERR "kgdboc: config string too long\n");
-		return -ENOSPC;
-	}
-
-	strcpy(config, opt);
-
-	return 0;
-}
-
-__setup("kgdboc=", kgdboc_option_setup);
 
 static void cleanup_kgdboc(void)
 {
 	if (kgdb_unregister_nmi_console())
@@ -162,15 +144,13 @@ static int configure_kgdboc(void)
 {
 	struct tty_driver *p;
 	int tty_line = 0;
-	int err;
+	int err = -ENODEV;
 	char *cptr = config;
 	struct console *cons;
 
-	err = kgdboc_option_setup(config);
-	if (err || !strlen(config) || isspace(config[0]))
+	if (!strlen(config) || isspace(config[0]))
 		goto noconfig;
 
-	err = -ENODEV;
 	kgdboc_io_ops.is_console = 0;
 	kgdb_tty_driver = NULL;
 
@@ -319,6 +299,25 @@ static struct kgdb_io kgdboc_io_ops = {
 };
 
 #ifdef CONFIG_KGDB_SERIAL_CONSOLE
+static int kgdboc_option_setup(char *opt)
+{
+	if (!opt) {
+		pr_err("config string not provided\n");
+		return -EINVAL;
+	}
+
+	if (strlen(opt) >= MAX_CONFIG_LEN) {
+		pr_err("config string too long\n");
+		return -ENOSPC;
+	}
+
+	strcpy(config, opt);
+
+	return 0;
+}
+
+__setup("kgdboc=", kgdboc_option_setup);
+
 /* This is only available if kgdboc is a built in for early debugging */
 static int __init kgdboc_early_init(char *opt)
 {
@@ -379,11 +379,10 @@ static void set_link_state_by_speed(struct dummy_hcd *dum_hcd)
 						USB_PORT_STAT_CONNECTION) == 0)
 				dum_hcd->port_status |=
 					(USB_PORT_STAT_C_CONNECTION << 16);
-			if ((dum_hcd->port_status &
-			     USB_PORT_STAT_ENABLE) == 1 &&
-				(dum_hcd->port_status &
-				 USB_SS_PORT_LS_U0) == 1 &&
-				dum_hcd->rh_state != DUMMY_RH_SUSPENDED)
+			if ((dum_hcd->port_status & USB_PORT_STAT_ENABLE) &&
+			    (dum_hcd->port_status &
+			     USB_PORT_STAT_LINK_STATE) == USB_SS_PORT_LS_U0 &&
+			    dum_hcd->rh_state != DUMMY_RH_SUSPENDED)
 				dum_hcd->active = 1;
 		}
 	} else {
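
The dummy_hcd fix above is the classic "masked flag compared against 1" bug: the result of masking a multi-bit flag is the flag value itself, never 1. A tiny runnable illustration with hypothetical flag values:

#include <stdio.h>

#define PORT_STAT_ENABLE 0x0002		/* hypothetical flag value */

int main(void)
{
	unsigned int status = PORT_STAT_ENABLE;

	/* Old-style test: always false here, since the masked value is 0x0002. */
	printf("buggy:   %d\n", (status & PORT_STAT_ENABLE) == 1);

	/* Fixed test: just check that the bit is set. */
	printf("correct: %d\n", (status & PORT_STAT_ENABLE) != 0);
	return 0;
}
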
@@ -9,7 +9,7 @@ btrfs-y += super.o ctree.o extent-tree.o print-tree.o root-tree.o dir-item.o \
 	   export.o tree-log.o free-space-cache.o zlib.o lzo.o \
 	   compression.o delayed-ref.o relocation.o delayed-inode.o scrub.o \
 	   reada.o backref.o ulist.o qgroup.o send.o dev-replace.o raid56.o \
-	   uuid-tree.o props.o hash.o free-space-tree.o
+	   uuid-tree.o props.o hash.o free-space-tree.o tree-checker.o
 
 btrfs-$(CONFIG_BTRFS_FS_POSIX_ACL) += acl.o
 btrfs-$(CONFIG_BTRFS_FS_CHECK_INTEGRITY) += check-integrity.o
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1415,7 +1415,7 @@ do { \
 #define BTRFS_INODE_ROOT_ITEM_INIT	(1 << 31)
 
 struct btrfs_map_token {
-	struct extent_buffer *eb;
+	const struct extent_buffer *eb;
 	char *kaddr;
 	unsigned long offset;
 };
@@ -1449,18 +1449,19 @@ static inline void btrfs_init_map_token (struct btrfs_map_token *token)
 					   sizeof(((type *)0)->member))) \
 
 #define DECLARE_BTRFS_SETGET_BITS(bits) \
-u##bits btrfs_get_token_##bits(struct extent_buffer *eb, void *ptr, \
-			       unsigned long off, \
+u##bits btrfs_get_token_##bits(const struct extent_buffer *eb, \
+			       const void *ptr, unsigned long off, \
 			       struct btrfs_map_token *token); \
-void btrfs_set_token_##bits(struct extent_buffer *eb, void *ptr, \
+void btrfs_set_token_##bits(struct extent_buffer *eb, const void *ptr, \
 			    unsigned long off, u##bits val, \
 			    struct btrfs_map_token *token); \
-static inline u##bits btrfs_get_##bits(struct extent_buffer *eb, void *ptr, \
+static inline u##bits btrfs_get_##bits(const struct extent_buffer *eb, \
+				       const void *ptr, \
 				       unsigned long off) \
 { \
 	return btrfs_get_token_##bits(eb, ptr, off, NULL); \
 } \
-static inline void btrfs_set_##bits(struct extent_buffer *eb, void *ptr, \
+static inline void btrfs_set_##bits(struct extent_buffer *eb, void *ptr,\
 				    unsigned long off, u##bits val) \
 { \
 	btrfs_set_token_##bits(eb, ptr, off, val, NULL); \
@@ -1472,7 +1473,8 @@ DECLARE_BTRFS_SETGET_BITS(32)
 DECLARE_BTRFS_SETGET_BITS(64)
 
 #define BTRFS_SETGET_FUNCS(name, type, member, bits) \
-static inline u##bits btrfs_##name(struct extent_buffer *eb, type *s) \
+static inline u##bits btrfs_##name(const struct extent_buffer *eb, \
+				   const type *s) \
 { \
 	BUILD_BUG_ON(sizeof(u##bits) != sizeof(((type *)0))->member); \
 	return btrfs_get_##bits(eb, s, offsetof(type, member)); \
@@ -1483,7 +1485,8 @@ static inline void btrfs_set_##name(struct extent_buffer *eb, type *s, \
 	BUILD_BUG_ON(sizeof(u##bits) != sizeof(((type *)0))->member); \
 	btrfs_set_##bits(eb, s, offsetof(type, member), val); \
 } \
-static inline u##bits btrfs_token_##name(struct extent_buffer *eb, type *s, \
+static inline u##bits btrfs_token_##name(const struct extent_buffer *eb,\
+					 const type *s, \
 					 struct btrfs_map_token *token) \
 { \
 	BUILD_BUG_ON(sizeof(u##bits) != sizeof(((type *)0))->member); \
@@ -1498,9 +1501,9 @@ static inline void btrfs_set_token_##name(struct extent_buffer *eb, \
 }
 
 #define BTRFS_SETGET_HEADER_FUNCS(name, type, member, bits) \
-static inline u##bits btrfs_##name(struct extent_buffer *eb) \
+static inline u##bits btrfs_##name(const struct extent_buffer *eb) \
 { \
-	type *p = page_address(eb->pages[0]); \
+	const type *p = page_address(eb->pages[0]); \
 	u##bits res = le##bits##_to_cpu(p->member); \
 	return res; \
 } \
@@ -1512,7 +1515,7 @@ static inline void btrfs_set_##name(struct extent_buffer *eb, \
 }
 
 #define BTRFS_SETGET_STACK_FUNCS(name, type, member, bits) \
-static inline u##bits btrfs_##name(type *s) \
+static inline u##bits btrfs_##name(const type *s) \
 { \
 	return le##bits##_to_cpu(s->member); \
 } \
@@ -1818,7 +1821,7 @@ static inline unsigned long btrfs_node_key_ptr_offset(int nr)
 		sizeof(struct btrfs_key_ptr) * nr;
 }
 
-void btrfs_node_key(struct extent_buffer *eb,
+void btrfs_node_key(const struct extent_buffer *eb,
 		    struct btrfs_disk_key *disk_key, int nr);
 
 static inline void btrfs_set_node_key(struct extent_buffer *eb,
@@ -1847,28 +1850,28 @@ static inline struct btrfs_item *btrfs_item_nr(int nr)
 	return (struct btrfs_item *)btrfs_item_nr_offset(nr);
 }
 
-static inline u32 btrfs_item_end(struct extent_buffer *eb,
+static inline u32 btrfs_item_end(const struct extent_buffer *eb,
 				 struct btrfs_item *item)
 {
 	return btrfs_item_offset(eb, item) + btrfs_item_size(eb, item);
 }
 
-static inline u32 btrfs_item_end_nr(struct extent_buffer *eb, int nr)
+static inline u32 btrfs_item_end_nr(const struct extent_buffer *eb, int nr)
 {
 	return btrfs_item_end(eb, btrfs_item_nr(nr));
 }
 
-static inline u32 btrfs_item_offset_nr(struct extent_buffer *eb, int nr)
+static inline u32 btrfs_item_offset_nr(const struct extent_buffer *eb, int nr)
 {
 	return btrfs_item_offset(eb, btrfs_item_nr(nr));
 }
 
-static inline u32 btrfs_item_size_nr(struct extent_buffer *eb, int nr)
+static inline u32 btrfs_item_size_nr(const struct extent_buffer *eb, int nr)
 {
 	return btrfs_item_size(eb, btrfs_item_nr(nr));
 }
 
-static inline void btrfs_item_key(struct extent_buffer *eb,
+static inline void btrfs_item_key(const struct extent_buffer *eb,
 				  struct btrfs_disk_key *disk_key, int nr)
 {
 	struct btrfs_item *item = btrfs_item_nr(nr);
@@ -1904,8 +1907,8 @@ BTRFS_SETGET_STACK_FUNCS(stack_dir_name_len, struct btrfs_dir_item,
 BTRFS_SETGET_STACK_FUNCS(stack_dir_transid, struct btrfs_dir_item,
 			 transid, 64);
 
-static inline void btrfs_dir_item_key(struct extent_buffer *eb,
-				      struct btrfs_dir_item *item,
+static inline void btrfs_dir_item_key(const struct extent_buffer *eb,
+				      const struct btrfs_dir_item *item,
 				      struct btrfs_disk_key *key)
 {
 	read_eb_member(eb, item, struct btrfs_dir_item, location, key);
@@ -1913,7 +1916,7 @@ static inline void btrfs_dir_item_key(struct extent_buffer *eb,
 
 static inline void btrfs_set_dir_item_key(struct extent_buffer *eb,
 					   struct btrfs_dir_item *item,
-					   struct btrfs_disk_key *key)
+					   const struct btrfs_disk_key *key)
 {
 	write_eb_member(eb, item, struct btrfs_dir_item, location, key);
 }
@@ -1925,8 +1928,8 @@ BTRFS_SETGET_FUNCS(free_space_bitmaps, struct btrfs_free_space_header,
 BTRFS_SETGET_FUNCS(free_space_generation, struct btrfs_free_space_header,
 		   generation, 64);
 
-static inline void btrfs_free_space_key(struct extent_buffer *eb,
-					struct btrfs_free_space_header *h,
+static inline void btrfs_free_space_key(const struct extent_buffer *eb,
+					const struct btrfs_free_space_header *h,
 					struct btrfs_disk_key *key)
 {
 	read_eb_member(eb, h, struct btrfs_free_space_header, location, key);
@@ -1934,7 +1937,7 @@ static inline void btrfs_free_space_key(struct extent_buffer *eb,
 
 static inline void btrfs_set_free_space_key(struct extent_buffer *eb,
 					    struct btrfs_free_space_header *h,
-					    struct btrfs_disk_key *key)
+					    const struct btrfs_disk_key *key)
 {
 	write_eb_member(eb, h, struct btrfs_free_space_header, location, key);
 }
@@ -1961,25 +1964,25 @@ static inline void btrfs_cpu_key_to_disk(struct btrfs_disk_key *disk,
 	disk->objectid = cpu_to_le64(cpu->objectid);
 }
 
-static inline void btrfs_node_key_to_cpu(struct extent_buffer *eb,
+static inline void btrfs_node_key_to_cpu(const struct extent_buffer *eb,
 					 struct btrfs_key *key, int nr)
 {
 	struct btrfs_disk_key disk_key;
 	btrfs_node_key(eb, &disk_key, nr);
 	btrfs_disk_key_to_cpu(key, &disk_key);
 }
 
-static inline void btrfs_item_key_to_cpu(struct extent_buffer *eb,
+static inline void btrfs_item_key_to_cpu(const struct extent_buffer *eb,
 					 struct btrfs_key *key, int nr)
 {
 	struct btrfs_disk_key disk_key;
 	btrfs_item_key(eb, &disk_key, nr);
 	btrfs_disk_key_to_cpu(key, &disk_key);
 }
 
-static inline void btrfs_dir_item_key_to_cpu(struct extent_buffer *eb,
-					     struct btrfs_dir_item *item,
+static inline void btrfs_dir_item_key_to_cpu(const struct extent_buffer *eb,
+					     const struct btrfs_dir_item *item,
 					     struct btrfs_key *key)
 {
 	struct btrfs_disk_key disk_key;
 	btrfs_dir_item_key(eb, item, &disk_key);
@@ -2012,7 +2015,7 @@ BTRFS_SETGET_STACK_FUNCS(stack_header_nritems, struct btrfs_header,
 			 nritems, 32);
 BTRFS_SETGET_STACK_FUNCS(stack_header_bytenr, struct btrfs_header, bytenr, 64);
 
-static inline int btrfs_header_flag(struct extent_buffer *eb, u64 flag)
+static inline int btrfs_header_flag(const struct extent_buffer *eb, u64 flag)
 {
 	return (btrfs_header_flags(eb) & flag) == flag;
 }
@@ -2031,7 +2034,7 @@ static inline int btrfs_clear_header_flag(struct extent_buffer *eb, u64 flag)
 	return (flags & flag) == flag;
 }
 
-static inline int btrfs_header_backref_rev(struct extent_buffer *eb)
+static inline int btrfs_header_backref_rev(const struct extent_buffer *eb)
 {
 	u64 flags = btrfs_header_flags(eb);
 	return flags >> BTRFS_BACKREF_REV_SHIFT;
@@ -2051,12 +2054,12 @@ static inline unsigned long btrfs_header_fsid(void)
 	return offsetof(struct btrfs_header, fsid);
 }
 
-static inline unsigned long btrfs_header_chunk_tree_uuid(struct extent_buffer *eb)
+static inline unsigned long btrfs_header_chunk_tree_uuid(const struct extent_buffer *eb)
 {
 	return offsetof(struct btrfs_header, chunk_tree_uuid);
 }
 
-static inline int btrfs_is_leaf(struct extent_buffer *eb)
+static inline int btrfs_is_leaf(const struct extent_buffer *eb)
 {
 	return btrfs_header_level(eb) == 0;
 }
@@ -2090,12 +2093,12 @@ BTRFS_SETGET_STACK_FUNCS(root_stransid, struct btrfs_root_item,
 BTRFS_SETGET_STACK_FUNCS(root_rtransid, struct btrfs_root_item,
 			 rtransid, 64);
 
-static inline bool btrfs_root_readonly(struct btrfs_root *root)
+static inline bool btrfs_root_readonly(const struct btrfs_root *root)
 {
 	return (root->root_item.flags & cpu_to_le64(BTRFS_ROOT_SUBVOL_RDONLY)) != 0;
 }
 
-static inline bool btrfs_root_dead(struct btrfs_root *root)
+static inline bool btrfs_root_dead(const struct btrfs_root *root)
 {
 	return (root->root_item.flags & cpu_to_le64(BTRFS_ROOT_SUBVOL_DEAD)) != 0;
 }
@@ -2152,51 +2155,51 @@ BTRFS_SETGET_STACK_FUNCS(backup_num_devices, struct btrfs_root_backup,
 /* struct btrfs_balance_item */
 BTRFS_SETGET_FUNCS(balance_flags, struct btrfs_balance_item, flags, 64);
 
-static inline void btrfs_balance_data(struct extent_buffer *eb,
-				      struct btrfs_balance_item *bi,
+static inline void btrfs_balance_data(const struct extent_buffer *eb,
+				      const struct btrfs_balance_item *bi,
 				      struct btrfs_disk_balance_args *ba)
 {
 	read_eb_member(eb, bi, struct btrfs_balance_item, data, ba);
 }
 
 static inline void btrfs_set_balance_data(struct extent_buffer *eb,
 					  struct btrfs_balance_item *bi,
-					  struct btrfs_disk_balance_args *ba)
+					  const struct btrfs_disk_balance_args *ba)
 {
 	write_eb_member(eb, bi, struct btrfs_balance_item, data, ba);
 }
 
-static inline void btrfs_balance_meta(struct extent_buffer *eb,
-				      struct btrfs_balance_item *bi,
+static inline void btrfs_balance_meta(const struct extent_buffer *eb,
+				      const struct btrfs_balance_item *bi,
 				      struct btrfs_disk_balance_args *ba)
 {
 	read_eb_member(eb, bi, struct btrfs_balance_item, meta, ba);
 }
 
 static inline void btrfs_set_balance_meta(struct extent_buffer *eb,
 					  struct btrfs_balance_item *bi,
-					  struct btrfs_disk_balance_args *ba)
+					  const struct btrfs_disk_balance_args *ba)
 {
 	write_eb_member(eb, bi, struct btrfs_balance_item, meta, ba);
 }
 
-static inline void btrfs_balance_sys(struct extent_buffer *eb,
-				     struct btrfs_balance_item *bi,
+static inline void btrfs_balance_sys(const struct extent_buffer *eb,
+				     const struct btrfs_balance_item *bi,
 				     struct btrfs_disk_balance_args *ba)
 {
 	read_eb_member(eb, bi, struct btrfs_balance_item, sys, ba);
 }
 
 static inline void btrfs_set_balance_sys(struct extent_buffer *eb,
 					 struct btrfs_balance_item *bi,
-					 struct btrfs_disk_balance_args *ba)
+					 const struct btrfs_disk_balance_args *ba)
 {
 	write_eb_member(eb, bi, struct btrfs_balance_item, sys, ba);
 }
 
 static inline void
 btrfs_disk_balance_args_to_cpu(struct btrfs_balance_args *cpu,
-			       struct btrfs_disk_balance_args *disk)
+			       const struct btrfs_disk_balance_args *disk)
 {
 	memset(cpu, 0, sizeof(*cpu));
 
@@ -2216,7 +2219,7 @@ btrfs_disk_balance_args_to_cpu(struct btrfs_balance_args *cpu,
 
 static inline void
 btrfs_cpu_balance_args_to_disk(struct btrfs_disk_balance_args *disk,
-			       struct btrfs_balance_args *cpu)
+			       const struct btrfs_balance_args *cpu)
 {
 	memset(disk, 0, sizeof(*disk));
 
@@ -2284,7 +2287,7 @@ BTRFS_SETGET_STACK_FUNCS(super_magic, struct btrfs_super_block, magic, 64);
 BTRFS_SETGET_STACK_FUNCS(super_uuid_tree_generation, struct btrfs_super_block,
 			 uuid_tree_generation, 64);
 
-static inline int btrfs_super_csum_size(struct btrfs_super_block *s)
+static inline int btrfs_super_csum_size(const struct btrfs_super_block *s)
 {
 	u16 t = btrfs_super_csum_type(s);
 	/*
@@ -2303,8 +2306,8 @@ static inline unsigned long btrfs_leaf_data(struct extent_buffer *l)
  * this returns the address of the start of the last item,
  * which is the stop of the leaf data stack
  */
-static inline unsigned int leaf_data_end(struct btrfs_root *root,
-					 struct extent_buffer *leaf)
+static inline unsigned int leaf_data_end(const struct btrfs_root *root,
+					 const struct extent_buffer *leaf)
 {
 	u32 nr = btrfs_header_nritems(leaf);
 
@@ -2329,7 +2332,7 @@ BTRFS_SETGET_STACK_FUNCS(stack_file_extent_compression,
 			 struct btrfs_file_extent_item, compression, 8);
 
 static inline unsigned long
-btrfs_file_extent_inline_start(struct btrfs_file_extent_item *e)
+btrfs_file_extent_inline_start(const struct btrfs_file_extent_item *e)
 {
 	return (unsigned long)e + BTRFS_FILE_EXTENT_INLINE_DATA_START;
 }
|
||||||
* size of any extent headers. If a file is compressed on disk, this is
|
* size of any extent headers. If a file is compressed on disk, this is
|
||||||
* the compressed size
|
* the compressed size
|
||||||
*/
|
*/
|
||||||
static inline u32 btrfs_file_extent_inline_item_len(struct extent_buffer *eb,
|
static inline u32 btrfs_file_extent_inline_item_len(
|
||||||
struct btrfs_item *e)
|
const struct extent_buffer *eb,
|
||||||
|
struct btrfs_item *e)
|
||||||
{
|
{
|
||||||
return btrfs_item_size(eb, e) - BTRFS_FILE_EXTENT_INLINE_DATA_START;
|
return btrfs_item_size(eb, e) - BTRFS_FILE_EXTENT_INLINE_DATA_START;
|
||||||
}
|
}
|
||||||
|
@@ -2372,9 +2376,9 @@ static inline u32 btrfs_file_extent_inline_item_len(struct extent_buffer *eb,
 /* this returns the number of file bytes represented by the inline item.
  * If an item is compressed, this is the uncompressed size
  */
-static inline u32 btrfs_file_extent_inline_len(struct extent_buffer *eb,
+static inline u32 btrfs_file_extent_inline_len(const struct extent_buffer *eb,
 					       int slot,
-					       struct btrfs_file_extent_item *fi)
+					       const struct btrfs_file_extent_item *fi)
 {
 	struct btrfs_map_token token;
 
@@ -2396,8 +2400,8 @@ static inline u32 btrfs_file_extent_inline_len(struct extent_buffer *eb,
 
 
 /* btrfs_dev_stats_item */
-static inline u64 btrfs_dev_stats_value(struct extent_buffer *eb,
-					struct btrfs_dev_stats_item *ptr,
+static inline u64 btrfs_dev_stats_value(const struct extent_buffer *eb,
+					const struct btrfs_dev_stats_item *ptr,
 					int index)
 {
 	u64 val;
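
The ctree.h hunks above mostly add const qualifiers to the token-pasting setget macros. Since the macros are hard to read in diff form, here is a simplified model of the pattern they generate; the struct, field, and function names are stand-ins, not the btrfs definitions.

#include <stdint.h>

struct eb { const uint8_t *kaddr; };
struct item { uint32_t size_le; };

/* Simplified: one getter per field, now taking const pointers only. */
#define DEFINE_GETTER(name, type, member) \
static inline uint32_t get_##name(const struct eb *eb, const type *s) \
{ \
	(void)eb;	/* the real helpers read through eb->kaddr */ \
	return s->member; \
}

DEFINE_GETTER(item_size, struct item, size_le)

/* Usage: uint32_t sz = get_item_size(eb, item); neither argument can be
 * written through, which is exactly what the const-ification enforces. */
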
@@ -50,6 +50,7 @@
 #include "sysfs.h"
 #include "qgroup.h"
 #include "compression.h"
+#include "tree-checker.h"
 
 #ifdef CONFIG_X86
 #include <asm/cpufeature.h>
@@ -452,9 +453,9 @@ static int btree_read_extent_buffer_pages(struct btrfs_root *root,
 	int mirror_num = 0;
 	int failed_mirror = 0;
 
-	clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
 	io_tree = &BTRFS_I(root->fs_info->btree_inode)->io_tree;
 	while (1) {
+		clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
 		ret = read_extent_buffer_pages(io_tree, eb, WAIT_COMPLETE,
 					       btree_get_extent, mirror_num);
 		if (!ret) {
@@ -465,14 +466,6 @@ static int btree_read_extent_buffer_pages(struct btrfs_root *root,
 			ret = -EIO;
 		}
 
-		/*
-		 * This buffer's crc is fine, but its contents are corrupted, so
-		 * there is no reason to read the other copies, they won't be
-		 * any less wrong.
-		 */
-		if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags))
-			break;
-
 		num_copies = btrfs_num_copies(root->fs_info,
 					      eb->start, eb->len);
 		if (num_copies == 1)
@@ -546,145 +539,6 @@ static int check_tree_block_fsid(struct btrfs_fs_info *fs_info,
 	return ret;
 }
 
-#define CORRUPT(reason, eb, root, slot) \
-	btrfs_crit(root->fs_info, "corrupt %s, %s: block=%llu," \
-		   " root=%llu, slot=%d", \
-		   btrfs_header_level(eb) == 0 ? "leaf" : "node",\
-		   reason, btrfs_header_bytenr(eb), root->objectid, slot)
-
-static noinline int check_leaf(struct btrfs_root *root,
-			       struct extent_buffer *leaf)
-{
-	struct btrfs_key key;
-	struct btrfs_key leaf_key;
-	u32 nritems = btrfs_header_nritems(leaf);
-	int slot;
-
-	/*
-	 * Extent buffers from a relocation tree have a owner field that
-	 * corresponds to the subvolume tree they are based on. So just from an
-	 * extent buffer alone we can not find out what is the id of the
-	 * corresponding subvolume tree, so we can not figure out if the extent
-	 * buffer corresponds to the root of the relocation tree or not. So skip
-	 * this check for relocation trees.
-	 */
-	if (nritems == 0 && !btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_RELOC)) {
-		struct btrfs_root *check_root;
-
-		key.objectid = btrfs_header_owner(leaf);
-		key.type = BTRFS_ROOT_ITEM_KEY;
-		key.offset = (u64)-1;
-
-		check_root = btrfs_get_fs_root(root->fs_info, &key, false);
-		/*
-		 * The only reason we also check NULL here is that during
-		 * open_ctree() some roots has not yet been set up.
-		 */
-		if (!IS_ERR_OR_NULL(check_root)) {
-			struct extent_buffer *eb;
-
-			eb = btrfs_root_node(check_root);
-			/* if leaf is the root, then it's fine */
-			if (leaf != eb) {
-				CORRUPT("non-root leaf's nritems is 0",
-					leaf, check_root, 0);
-				free_extent_buffer(eb);
-				return -EIO;
-			}
-			free_extent_buffer(eb);
-		}
-		return 0;
-	}
-
-	if (nritems == 0)
-		return 0;
-
-	/* Check the 0 item */
-	if (btrfs_item_offset_nr(leaf, 0) + btrfs_item_size_nr(leaf, 0) !=
-	    BTRFS_LEAF_DATA_SIZE(root)) {
-		CORRUPT("invalid item offset size pair", leaf, root, 0);
-		return -EIO;
-	}
-
-	/*
-	 * Check to make sure each items keys are in the correct order and their
-	 * offsets make sense. We only have to loop through nritems-1 because
-	 * we check the current slot against the next slot, which verifies the
-	 * next slot's offset+size makes sense and that the current's slot
-	 * offset is correct.
-	 */
-	for (slot = 0; slot < nritems - 1; slot++) {
-		btrfs_item_key_to_cpu(leaf, &leaf_key, slot);
-		btrfs_item_key_to_cpu(leaf, &key, slot + 1);
-
-		/* Make sure the keys are in the right order */
-		if (btrfs_comp_cpu_keys(&leaf_key, &key) >= 0) {
-			CORRUPT("bad key order", leaf, root, slot);
-			return -EIO;
-		}
-
-		/*
-		 * Make sure the offset and ends are right, remember that the
-		 * item data starts at the end of the leaf and grows towards the
-		 * front.
-		 */
-		if (btrfs_item_offset_nr(leaf, slot) !=
-			btrfs_item_end_nr(leaf, slot + 1)) {
-			CORRUPT("slot offset bad", leaf, root, slot);
-			return -EIO;
-		}
-
-		/*
-		 * Check to make sure that we don't point outside of the leaf,
-		 * just in case all the items are consistent to each other, but
-		 * all point outside of the leaf.
-		 */
-		if (btrfs_item_end_nr(leaf, slot) >
-		    BTRFS_LEAF_DATA_SIZE(root)) {
-			CORRUPT("slot end outside of leaf", leaf, root, slot);
-			return -EIO;
-		}
-	}
-
-	return 0;
-}
-
-static int check_node(struct btrfs_root *root, struct extent_buffer *node)
-{
-	unsigned long nr = btrfs_header_nritems(node);
-	struct btrfs_key key, next_key;
-	int slot;
-	u64 bytenr;
-	int ret = 0;
-
-	if (nr == 0 || nr > BTRFS_NODEPTRS_PER_BLOCK(root)) {
-		btrfs_crit(root->fs_info,
-			   "corrupt node: block %llu root %llu nritems %lu",
-			   node->start, root->objectid, nr);
-		return -EIO;
-	}
-
-	for (slot = 0; slot < nr - 1; slot++) {
-		bytenr = btrfs_node_blockptr(node, slot);
-		btrfs_node_key_to_cpu(node, &key, slot);
-		btrfs_node_key_to_cpu(node, &next_key, slot + 1);
-
-		if (!bytenr) {
-			CORRUPT("invalid item slot", node, root, slot);
-			ret = -EIO;
-			goto out;
-		}
-
-		if (btrfs_comp_cpu_keys(&key, &next_key) >= 0) {
-			CORRUPT("bad key order", node, root, slot);
-			ret = -EIO;
-			goto out;
-		}
-	}
-out:
-	return ret;
-}
-
 static int btree_readpage_end_io_hook(struct btrfs_io_bio *io_bio,
 				      u64 phy_offset, struct page *page,
 				      u64 start, u64 end, int mirror)
@@ -750,12 +604,12 @@ static int btree_readpage_end_io_hook(struct btrfs_io_bio *io_bio,
 	 * that we don't try and read the other copies of this block, just
 	 * return -EIO.
 	 */
-	if (found_level == 0 && check_leaf(root, eb)) {
+	if (found_level == 0 && btrfs_check_leaf_full(root, eb)) {
 		set_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
 		ret = -EIO;
 	}
 
-	if (found_level > 0 && check_node(root, eb))
+	if (found_level > 0 && btrfs_check_node(root, eb))
 		ret = -EIO;
 
 	if (!ret)
@@ -4086,7 +3940,13 @@ void btrfs_mark_buffer_dirty(struct extent_buffer *buf)
 					 buf->len,
 					 root->fs_info->dirty_metadata_batch);
 #ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
-	if (btrfs_header_level(buf) == 0 && check_leaf(root, buf)) {
+	/*
+	 * Since btrfs_mark_buffer_dirty() can be called with item pointer set
+	 * but item data not updated.
+	 * So here we should only check item pointers, not item data.
+	 */
+	if (btrfs_header_level(buf) == 0 &&
+	    btrfs_check_leaf_relaxed(root, buf)) {
 		btrfs_print_leaf(root, buf);
 		ASSERT(0);
 	}
@@ -9896,6 +9896,8 @@ static int find_first_block_group(struct btrfs_root *root,
 	int ret = 0;
 	struct btrfs_key found_key;
 	struct extent_buffer *leaf;
+	struct btrfs_block_group_item bg;
+	u64 flags;
 	int slot;
 
 	ret = btrfs_search_slot(NULL, root, key, path, 0, 0);
@@ -9930,8 +9932,32 @@ static int find_first_block_group(struct btrfs_root *root,
 				"logical %llu len %llu found bg but no related chunk",
 				  found_key.objectid, found_key.offset);
 			ret = -ENOENT;
+		} else if (em->start != found_key.objectid ||
+			   em->len != found_key.offset) {
+			btrfs_err(root->fs_info,
+	"block group %llu len %llu mismatch with chunk %llu len %llu",
+				  found_key.objectid, found_key.offset,
+				  em->start, em->len);
+			ret = -EUCLEAN;
 		} else {
-			ret = 0;
+			read_extent_buffer(leaf, &bg,
+					   btrfs_item_ptr_offset(leaf, slot),
+					   sizeof(bg));
+			flags = btrfs_block_group_flags(&bg) &
+				BTRFS_BLOCK_GROUP_TYPE_MASK;
+
+			if (flags != (em->map_lookup->type &
+				      BTRFS_BLOCK_GROUP_TYPE_MASK)) {
+				btrfs_err(root->fs_info,
+"block group %llu len %llu type flags 0x%llx mismatch with chunk type flags 0x%llx",
+					  found_key.objectid,
+					  found_key.offset, flags,
+					  (BTRFS_BLOCK_GROUP_TYPE_MASK &
+					   em->map_lookup->type));
+				ret = -EUCLEAN;
+			} else {
+				ret = 0;
+			}
 		}
 		free_extent_map(em);
 		goto out;
@@ -10159,6 +10185,62 @@ btrfs_create_block_group_cache(struct btrfs_root *root, u64 start, u64 size)
 	return cache;
 }
 
+
+/*
+ * Iterate all chunks and verify that each of them has the corresponding block
+ * group
+ */
+static int check_chunk_block_group_mappings(struct btrfs_fs_info *fs_info)
+{
+	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
+	struct extent_map *em;
+	struct btrfs_block_group_cache *bg;
+	u64 start = 0;
+	int ret = 0;
+
+	while (1) {
+		read_lock(&map_tree->map_tree.lock);
+		/*
+		 * lookup_extent_mapping will return the first extent map
+		 * intersecting the range, so setting @len to 1 is enough to
+		 * get the first chunk.
+		 */
+		em = lookup_extent_mapping(&map_tree->map_tree, start, 1);
+		read_unlock(&map_tree->map_tree.lock);
+		if (!em)
+			break;
+
+		bg = btrfs_lookup_block_group(fs_info, em->start);
+		if (!bg) {
+			btrfs_err(fs_info,
+	"chunk start=%llu len=%llu doesn't have corresponding block group",
+				  em->start, em->len);
+			ret = -EUCLEAN;
+			free_extent_map(em);
+			break;
+		}
+		if (bg->key.objectid != em->start ||
+		    bg->key.offset != em->len ||
+		    (bg->flags & BTRFS_BLOCK_GROUP_TYPE_MASK) !=
+		    (em->map_lookup->type & BTRFS_BLOCK_GROUP_TYPE_MASK)) {
+			btrfs_err(fs_info,
+"chunk start=%llu len=%llu flags=0x%llx doesn't match block group start=%llu len=%llu flags=0x%llx",
+				  em->start, em->len,
+				  em->map_lookup->type & BTRFS_BLOCK_GROUP_TYPE_MASK,
+				  bg->key.objectid, bg->key.offset,
+				  bg->flags & BTRFS_BLOCK_GROUP_TYPE_MASK);
+			ret = -EUCLEAN;
+			free_extent_map(em);
+			btrfs_put_block_group(bg);
+			break;
+		}
+		start = em->start + em->len;
+		free_extent_map(em);
+		btrfs_put_block_group(bg);
+	}
+	return ret;
+}
+
 int btrfs_read_block_groups(struct btrfs_root *root)
 {
 	struct btrfs_path *path;
@@ -10343,7 +10425,7 @@ int btrfs_read_block_groups(struct btrfs_root *root)
 	}
 
 	init_global_block_rsv(info);
-	ret = 0;
+	ret = check_chunk_block_group_mappings(info);
 error:
 	btrfs_free_path(path);
 	return ret;
@@ -5431,9 +5431,8 @@ unlock_exit:
 	return ret;
 }
 
-void read_extent_buffer(struct extent_buffer *eb, void *dstv,
-			unsigned long start,
-			unsigned long len)
+void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
+			unsigned long start, unsigned long len)
 {
 	size_t cur;
 	size_t offset;
@@ -5462,9 +5461,9 @@ void read_extent_buffer(struct extent_buffer *eb, void *dstv,
 	}
 }
 
-int read_extent_buffer_to_user(struct extent_buffer *eb, void __user *dstv,
-			       unsigned long start,
-			       unsigned long len)
+int read_extent_buffer_to_user(const struct extent_buffer *eb,
+			       void __user *dstv,
+			       unsigned long start, unsigned long len)
 {
 	size_t cur;
 	size_t offset;
@@ -5504,10 +5503,10 @@ int read_extent_buffer_to_user(struct extent_buffer *eb, void __user *dstv,
  * return 1 if the item spans two pages.
  * return -EINVAL otherwise.
  */
-int map_private_extent_buffer(struct extent_buffer *eb, unsigned long start,
-			      unsigned long min_len, char **map,
-			      unsigned long *map_start,
+int map_private_extent_buffer(const struct extent_buffer *eb,
+			      unsigned long start, unsigned long min_len,
+			      char **map, unsigned long *map_start,
 			      unsigned long *map_len)
 {
 	size_t offset = start & (PAGE_SIZE - 1);
 	char *kaddr;
@@ -5541,9 +5540,8 @@ int map_private_extent_buffer(struct extent_buffer *eb, unsigned long start,
 	return 0;
 }
 
-int memcmp_extent_buffer(struct extent_buffer *eb, const void *ptrv,
-			 unsigned long start,
-			 unsigned long len)
+int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
+			 unsigned long start, unsigned long len)
 {
 	size_t cur;
 	size_t offset;
@@ -396,14 +396,13 @@ static inline void extent_buffer_get(struct extent_buffer *eb)
 	atomic_inc(&eb->refs);
 }
 
-int memcmp_extent_buffer(struct extent_buffer *eb, const void *ptrv,
-			 unsigned long start,
-			 unsigned long len);
-void read_extent_buffer(struct extent_buffer *eb, void *dst,
+int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
+			 unsigned long start, unsigned long len);
+void read_extent_buffer(const struct extent_buffer *eb, void *dst,
 			unsigned long start,
 			unsigned long len);
-int read_extent_buffer_to_user(struct extent_buffer *eb, void __user *dst,
-			       unsigned long start,
+int read_extent_buffer_to_user(const struct extent_buffer *eb,
+			       void __user *dst, unsigned long start,
 			       unsigned long len);
 void write_extent_buffer(struct extent_buffer *eb, const void *src,
 			 unsigned long start, unsigned long len);
@@ -428,10 +427,10 @@ void set_extent_buffer_uptodate(struct extent_buffer *eb);
 void clear_extent_buffer_uptodate(struct extent_buffer *eb);
 int extent_buffer_uptodate(struct extent_buffer *eb);
 int extent_buffer_under_io(struct extent_buffer *eb);
-int map_private_extent_buffer(struct extent_buffer *eb, unsigned long offset,
-			      unsigned long min_len, char **map,
-			      unsigned long *map_start,
+int map_private_extent_buffer(const struct extent_buffer *eb,
+			      unsigned long offset, unsigned long min_len,
+			      char **map, unsigned long *map_start,
 			      unsigned long *map_len);
 void extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end);
 void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end);
 void extent_clear_unlock_delalloc(struct inode *inode, u64 start, u64 end,
@@ -2464,6 +2464,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
 	struct rb_node *n;
 	int count = 0;
 
+	spin_lock(&ctl->tree_lock);
 	for (n = rb_first(&ctl->free_space_offset); n; n = rb_next(n)) {
 		info = rb_entry(n, struct btrfs_free_space, offset_index);
 		if (info->bytes >= bytes && !block_group->ro)
@@ -2473,6 +2474,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
 			   info->offset, info->bytes,
 			   (info->bitmap) ? "yes" : "no");
 	}
+	spin_unlock(&ctl->tree_lock);
 	btrfs_info(block_group->fs_info, "block group has cluster?: %s",
 		   list_empty(&block_group->cluster_list) ? "no" : "yes");
 	btrfs_info(block_group->fs_info,
@@ -50,8 +50,8 @@ static inline void put_unaligned_le8(u8 val, void *p)
  */
 
 #define DEFINE_BTRFS_SETGET_BITS(bits)					\
-u##bits btrfs_get_token_##bits(struct extent_buffer *eb, void *ptr,	\
-			       unsigned long off,			\
+u##bits btrfs_get_token_##bits(const struct extent_buffer *eb,		\
+			       const void *ptr, unsigned long off,	\
 			       struct btrfs_map_token *token)		\
 {									\
 	unsigned long part_offset = (unsigned long)ptr;			\
@@ -90,7 +90,8 @@ u##bits btrfs_get_token_##bits(struct extent_buffer *eb, void *ptr, \
 	return res;							\
 }									\
 void btrfs_set_token_##bits(struct extent_buffer *eb,			\
-			    void *ptr, unsigned long off, u##bits val,	\
+			    const void *ptr, unsigned long off,		\
+			    u##bits val,				\
 			    struct btrfs_map_token *token)		\
 {									\
 	unsigned long part_offset = (unsigned long)ptr;			\
@@ -133,7 +134,7 @@ DEFINE_BTRFS_SETGET_BITS(16)
 DEFINE_BTRFS_SETGET_BITS(32)
 DEFINE_BTRFS_SETGET_BITS(64)
 
-void btrfs_node_key(struct extent_buffer *eb,
+void btrfs_node_key(const struct extent_buffer *eb,
 		    struct btrfs_disk_key *disk_key, int nr)
 {
 	unsigned long ptr = btrfs_node_key_ptr_offset(nr);
fs/btrfs/tree-checker.c (new file, 649 lines)
@@ -0,0 +1,649 @@
+/*
+ * Copyright (C) Qu Wenruo 2017.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program.
+ */
+
+/*
+ * The module is used to catch unexpected/corrupted tree block data.
+ * Such behavior can be caused either by a fuzzed image or bugs.
+ *
+ * The objective is to do leaf/node validation checks when tree block is read
+ * from disk, and check *every* possible member, so other code won't
+ * need to checking them again.
+ *
+ * Due to the potential and unwanted damage, every checker needs to be
+ * carefully reviewed otherwise so it does not prevent mount of valid images.
+ */
+
+#include "ctree.h"
+#include "tree-checker.h"
+#include "disk-io.h"
+#include "compression.h"
+#include "hash.h"
+#include "volumes.h"
+
+#define CORRUPT(reason, eb, root, slot)					\
+	btrfs_crit(root->fs_info,					\
+		   "corrupt %s, %s: block=%llu, root=%llu, slot=%d",	\
+		   btrfs_header_level(eb) == 0 ? "leaf" : "node",	\
+		   reason, btrfs_header_bytenr(eb), root->objectid, slot)
+
+/*
+ * Error message should follow the following format:
+ * corrupt <type>: <identifier>, <reason>[, <bad_value>]
+ *
+ * @type:	leaf or node
+ * @identifier:	the necessary info to locate the leaf/node.
+ *		It's recommened to decode key.objecitd/offset if it's
+ *		meaningful.
+ * @reason:	describe the error
+ * @bad_value:	optional, it's recommened to output bad value and its
+ *		expected value (range).
+ *
+ * Since comma is used to separate the components, only space is allowed
+ * inside each component.
+ */
+
+/*
+ * Append generic "corrupt leaf/node root=%llu block=%llu slot=%d: " to @fmt.
+ * Allows callers to customize the output.
+ */
+__printf(4, 5)
+static void generic_err(const struct btrfs_root *root,
+			const struct extent_buffer *eb, int slot,
+			const char *fmt, ...)
+{
+	struct va_format vaf;
+	va_list args;
+
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	btrfs_crit(root->fs_info,
+		"corrupt %s: root=%llu block=%llu slot=%d, %pV",
+		btrfs_header_level(eb) == 0 ? "leaf" : "node",
+		root->objectid, btrfs_header_bytenr(eb), slot, &vaf);
+	va_end(args);
+}
+
|
static int check_extent_data_item(struct btrfs_root *root,
|
||||||
|
struct extent_buffer *leaf,
|
||||||
|
struct btrfs_key *key, int slot)
|
||||||
|
{
|
||||||
|
struct btrfs_file_extent_item *fi;
|
||||||
|
u32 sectorsize = root->sectorsize;
|
||||||
|
u32 item_size = btrfs_item_size_nr(leaf, slot);
|
||||||
|
|
||||||
|
if (!IS_ALIGNED(key->offset, sectorsize)) {
|
||||||
|
CORRUPT("unaligned key offset for file extent",
|
||||||
|
leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
fi = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);
|
||||||
|
|
||||||
|
if (btrfs_file_extent_type(leaf, fi) > BTRFS_FILE_EXTENT_TYPES) {
|
||||||
|
CORRUPT("invalid file extent type", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Support for new compression/encrption must introduce incompat flag,
|
||||||
|
* and must be caught in open_ctree().
|
||||||
|
*/
|
||||||
|
if (btrfs_file_extent_compression(leaf, fi) > BTRFS_COMPRESS_TYPES) {
|
||||||
|
CORRUPT("invalid file extent compression", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
if (btrfs_file_extent_encryption(leaf, fi)) {
|
||||||
|
CORRUPT("invalid file extent encryption", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
if (btrfs_file_extent_type(leaf, fi) == BTRFS_FILE_EXTENT_INLINE) {
|
||||||
|
/* Inline extent must have 0 as key offset */
|
||||||
|
if (key->offset) {
|
||||||
|
CORRUPT("inline extent has non-zero key offset",
|
||||||
|
leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Compressed inline extent has no on-disk size, skip it */
|
||||||
|
if (btrfs_file_extent_compression(leaf, fi) !=
|
||||||
|
BTRFS_COMPRESS_NONE)
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
/* Uncompressed inline extent size must match item size */
|
||||||
|
if (item_size != BTRFS_FILE_EXTENT_INLINE_DATA_START +
|
||||||
|
btrfs_file_extent_ram_bytes(leaf, fi)) {
|
||||||
|
CORRUPT("plaintext inline extent has invalid size",
|
||||||
|
leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Regular or preallocated extent has fixed item size */
|
||||||
|
if (item_size != sizeof(*fi)) {
|
||||||
|
CORRUPT(
|
||||||
|
"regluar or preallocated extent data item size is invalid",
|
||||||
|
leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
if (!IS_ALIGNED(btrfs_file_extent_ram_bytes(leaf, fi), sectorsize) ||
|
||||||
|
!IS_ALIGNED(btrfs_file_extent_disk_bytenr(leaf, fi), sectorsize) ||
|
||||||
|
!IS_ALIGNED(btrfs_file_extent_disk_num_bytes(leaf, fi), sectorsize) ||
|
||||||
|
!IS_ALIGNED(btrfs_file_extent_offset(leaf, fi), sectorsize) ||
|
||||||
|
!IS_ALIGNED(btrfs_file_extent_num_bytes(leaf, fi), sectorsize)) {
|
||||||
|
CORRUPT(
|
||||||
|
"regular or preallocated extent data item has unaligned value",
|
||||||
|
leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int check_csum_item(struct btrfs_root *root, struct extent_buffer *leaf,
|
||||||
|
struct btrfs_key *key, int slot)
|
||||||
|
{
|
||||||
|
u32 sectorsize = root->sectorsize;
|
||||||
|
u32 csumsize = btrfs_super_csum_size(root->fs_info->super_copy);
|
||||||
|
|
||||||
|
if (key->objectid != BTRFS_EXTENT_CSUM_OBJECTID) {
|
||||||
|
CORRUPT("invalid objectid for csum item", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
if (!IS_ALIGNED(key->offset, sectorsize)) {
|
||||||
|
CORRUPT("unaligned key offset for csum item", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
if (!IS_ALIGNED(btrfs_item_size_nr(leaf, slot), csumsize)) {
|
||||||
|
CORRUPT("unaligned csum item size", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Customized reported for dir_item, only important new info is key->objectid,
|
||||||
|
* which represents inode number
|
||||||
|
*/
|
||||||
|
__printf(4, 5)
|
||||||
|
static void dir_item_err(const struct btrfs_root *root,
|
||||||
|
const struct extent_buffer *eb, int slot,
|
||||||
|
const char *fmt, ...)
|
||||||
|
{
|
||||||
|
struct btrfs_key key;
|
||||||
|
struct va_format vaf;
|
||||||
|
va_list args;
|
||||||
|
|
||||||
|
btrfs_item_key_to_cpu(eb, &key, slot);
|
||||||
|
va_start(args, fmt);
|
||||||
|
|
||||||
|
vaf.fmt = fmt;
|
||||||
|
vaf.va = &args;
|
||||||
|
|
||||||
|
btrfs_crit(root->fs_info,
|
||||||
|
"corrupt %s: root=%llu block=%llu slot=%d ino=%llu, %pV",
|
||||||
|
btrfs_header_level(eb) == 0 ? "leaf" : "node", root->objectid,
|
||||||
|
btrfs_header_bytenr(eb), slot, key.objectid, &vaf);
|
||||||
|
va_end(args);
|
||||||
|
}
|
||||||
|
|
||||||
|
static int check_dir_item(struct btrfs_root *root,
|
||||||
|
struct extent_buffer *leaf,
|
||||||
|
struct btrfs_key *key, int slot)
|
||||||
|
{
|
||||||
|
struct btrfs_dir_item *di;
|
||||||
|
u32 item_size = btrfs_item_size_nr(leaf, slot);
|
||||||
|
u32 cur = 0;
|
||||||
|
|
||||||
|
di = btrfs_item_ptr(leaf, slot, struct btrfs_dir_item);
|
||||||
|
while (cur < item_size) {
|
||||||
|
u32 name_len;
|
||||||
|
u32 data_len;
|
||||||
|
u32 max_name_len;
|
||||||
|
u32 total_size;
|
||||||
|
u32 name_hash;
|
||||||
|
u8 dir_type;
|
||||||
|
|
||||||
|
/* header itself should not cross item boundary */
|
||||||
|
if (cur + sizeof(*di) > item_size) {
|
||||||
|
dir_item_err(root, leaf, slot,
|
||||||
|
"dir item header crosses item boundary, have %zu boundary %u",
|
||||||
|
cur + sizeof(*di), item_size);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* dir type check */
|
||||||
|
dir_type = btrfs_dir_type(leaf, di);
|
||||||
|
if (dir_type >= BTRFS_FT_MAX) {
|
||||||
|
dir_item_err(root, leaf, slot,
|
||||||
|
"invalid dir item type, have %u expect [0, %u)",
|
||||||
|
dir_type, BTRFS_FT_MAX);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (key->type == BTRFS_XATTR_ITEM_KEY &&
|
||||||
|
dir_type != BTRFS_FT_XATTR) {
|
||||||
|
dir_item_err(root, leaf, slot,
|
||||||
|
"invalid dir item type for XATTR key, have %u expect %u",
|
||||||
|
dir_type, BTRFS_FT_XATTR);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
if (dir_type == BTRFS_FT_XATTR &&
|
||||||
|
key->type != BTRFS_XATTR_ITEM_KEY) {
|
||||||
|
dir_item_err(root, leaf, slot,
|
||||||
|
"xattr dir type found for non-XATTR key");
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
if (dir_type == BTRFS_FT_XATTR)
|
||||||
|
max_name_len = XATTR_NAME_MAX;
|
||||||
|
else
|
||||||
|
max_name_len = BTRFS_NAME_LEN;
|
||||||
|
|
||||||
|
/* Name/data length check */
|
||||||
|
name_len = btrfs_dir_name_len(leaf, di);
|
||||||
|
data_len = btrfs_dir_data_len(leaf, di);
|
||||||
|
if (name_len > max_name_len) {
|
||||||
|
dir_item_err(root, leaf, slot,
|
||||||
|
"dir item name len too long, have %u max %u",
|
||||||
|
name_len, max_name_len);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
if (name_len + data_len > BTRFS_MAX_XATTR_SIZE(root)) {
|
||||||
|
dir_item_err(root, leaf, slot,
|
||||||
|
"dir item name and data len too long, have %u max %u",
|
||||||
|
name_len + data_len,
|
||||||
|
BTRFS_MAX_XATTR_SIZE(root));
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (data_len && dir_type != BTRFS_FT_XATTR) {
|
||||||
|
dir_item_err(root, leaf, slot,
|
||||||
|
"dir item with invalid data len, have %u expect 0",
|
||||||
|
data_len);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
total_size = sizeof(*di) + name_len + data_len;
|
||||||
|
|
||||||
|
/* header and name/data should not cross item boundary */
|
||||||
|
if (cur + total_size > item_size) {
|
||||||
|
dir_item_err(root, leaf, slot,
|
||||||
|
"dir item data crosses item boundary, have %u boundary %u",
|
||||||
|
cur + total_size, item_size);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Special check for XATTR/DIR_ITEM, as key->offset is name
|
||||||
|
* hash, should match its name
|
||||||
|
*/
|
||||||
|
if (key->type == BTRFS_DIR_ITEM_KEY ||
|
||||||
|
key->type == BTRFS_XATTR_ITEM_KEY) {
|
||||||
|
char namebuf[max(BTRFS_NAME_LEN, XATTR_NAME_MAX)];
|
||||||
|
|
||||||
|
read_extent_buffer(leaf, namebuf,
|
||||||
|
(unsigned long)(di + 1), name_len);
|
||||||
|
name_hash = btrfs_name_hash(namebuf, name_len);
|
||||||
|
if (key->offset != name_hash) {
|
||||||
|
dir_item_err(root, leaf, slot,
|
||||||
|
"name hash mismatch with key, have 0x%016x expect 0x%016llx",
|
||||||
|
name_hash, key->offset);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
cur += total_size;
|
||||||
|
di = (struct btrfs_dir_item *)((void *)di + total_size);
|
||||||
|
}
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
__printf(4, 5)
|
||||||
|
__cold
|
||||||
|
static void block_group_err(const struct btrfs_fs_info *fs_info,
|
||||||
|
const struct extent_buffer *eb, int slot,
|
||||||
|
const char *fmt, ...)
|
||||||
|
{
|
||||||
|
struct btrfs_key key;
|
||||||
|
struct va_format vaf;
|
||||||
|
va_list args;
|
||||||
|
|
||||||
|
btrfs_item_key_to_cpu(eb, &key, slot);
|
||||||
|
va_start(args, fmt);
|
||||||
|
|
||||||
|
vaf.fmt = fmt;
|
||||||
|
vaf.va = &args;
|
||||||
|
|
||||||
|
btrfs_crit(fs_info,
|
||||||
|
"corrupt %s: root=%llu block=%llu slot=%d bg_start=%llu bg_len=%llu, %pV",
|
||||||
|
btrfs_header_level(eb) == 0 ? "leaf" : "node",
|
||||||
|
btrfs_header_owner(eb), btrfs_header_bytenr(eb), slot,
|
||||||
|
key.objectid, key.offset, &vaf);
|
||||||
|
va_end(args);
|
||||||
|
}
|
||||||
|
|
||||||
|
static int check_block_group_item(struct btrfs_fs_info *fs_info,
|
||||||
|
struct extent_buffer *leaf,
|
||||||
|
struct btrfs_key *key, int slot)
|
||||||
|
{
|
||||||
|
struct btrfs_block_group_item bgi;
|
||||||
|
u32 item_size = btrfs_item_size_nr(leaf, slot);
|
||||||
|
u64 flags;
|
||||||
|
u64 type;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Here we don't really care about alignment since extent allocator can
|
||||||
|
* handle it. We care more about the size, as if one block group is
|
||||||
|
* larger than maximum size, it's must be some obvious corruption.
|
||||||
|
*/
|
||||||
|
if (key->offset > BTRFS_MAX_DATA_CHUNK_SIZE || key->offset == 0) {
|
||||||
|
block_group_err(fs_info, leaf, slot,
|
||||||
|
"invalid block group size, have %llu expect (0, %llu]",
|
||||||
|
key->offset, BTRFS_MAX_DATA_CHUNK_SIZE);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (item_size != sizeof(bgi)) {
|
||||||
|
block_group_err(fs_info, leaf, slot,
|
||||||
|
"invalid item size, have %u expect %zu",
|
||||||
|
item_size, sizeof(bgi));
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
read_extent_buffer(leaf, &bgi, btrfs_item_ptr_offset(leaf, slot),
|
||||||
|
sizeof(bgi));
|
||||||
|
if (btrfs_block_group_chunk_objectid(&bgi) !=
|
||||||
|
BTRFS_FIRST_CHUNK_TREE_OBJECTID) {
|
||||||
|
block_group_err(fs_info, leaf, slot,
|
||||||
|
"invalid block group chunk objectid, have %llu expect %llu",
|
||||||
|
btrfs_block_group_chunk_objectid(&bgi),
|
||||||
|
BTRFS_FIRST_CHUNK_TREE_OBJECTID);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (btrfs_block_group_used(&bgi) > key->offset) {
|
||||||
|
block_group_err(fs_info, leaf, slot,
|
||||||
|
"invalid block group used, have %llu expect [0, %llu)",
|
||||||
|
btrfs_block_group_used(&bgi), key->offset);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
flags = btrfs_block_group_flags(&bgi);
|
||||||
|
if (hweight64(flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) > 1) {
|
||||||
|
block_group_err(fs_info, leaf, slot,
|
||||||
|
"invalid profile flags, have 0x%llx (%lu bits set) expect no more than 1 bit set",
|
||||||
|
flags & BTRFS_BLOCK_GROUP_PROFILE_MASK,
|
||||||
|
hweight64(flags & BTRFS_BLOCK_GROUP_PROFILE_MASK));
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
type = flags & BTRFS_BLOCK_GROUP_TYPE_MASK;
|
||||||
|
if (type != BTRFS_BLOCK_GROUP_DATA &&
|
||||||
|
type != BTRFS_BLOCK_GROUP_METADATA &&
|
||||||
|
type != BTRFS_BLOCK_GROUP_SYSTEM &&
|
||||||
|
type != (BTRFS_BLOCK_GROUP_METADATA |
|
||||||
|
BTRFS_BLOCK_GROUP_DATA)) {
|
||||||
|
block_group_err(fs_info, leaf, slot,
|
||||||
|
"invalid type, have 0x%llx (%lu bits set) expect either 0x%llx, 0x%llx, 0x%llx or 0x%llx",
|
||||||
|
type, hweight64(type),
|
||||||
|
BTRFS_BLOCK_GROUP_DATA, BTRFS_BLOCK_GROUP_METADATA,
|
||||||
|
BTRFS_BLOCK_GROUP_SYSTEM,
|
||||||
|
BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_DATA);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Common point to switch the item-specific validation.
|
||||||
|
*/
|
||||||
|
static int check_leaf_item(struct btrfs_root *root,
|
||||||
|
struct extent_buffer *leaf,
|
||||||
|
struct btrfs_key *key, int slot)
|
||||||
|
{
|
||||||
|
int ret = 0;
|
||||||
|
|
||||||
|
switch (key->type) {
|
||||||
|
case BTRFS_EXTENT_DATA_KEY:
|
||||||
|
ret = check_extent_data_item(root, leaf, key, slot);
|
||||||
|
break;
|
||||||
|
case BTRFS_EXTENT_CSUM_KEY:
|
||||||
|
ret = check_csum_item(root, leaf, key, slot);
|
||||||
|
break;
|
||||||
|
case BTRFS_DIR_ITEM_KEY:
|
||||||
|
case BTRFS_DIR_INDEX_KEY:
|
||||||
|
case BTRFS_XATTR_ITEM_KEY:
|
||||||
|
ret = check_dir_item(root, leaf, key, slot);
|
||||||
|
break;
|
||||||
|
case BTRFS_BLOCK_GROUP_ITEM_KEY:
|
||||||
|
ret = check_block_group_item(root->fs_info, leaf, key, slot);
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int check_leaf(struct btrfs_root *root, struct extent_buffer *leaf,
|
||||||
|
bool check_item_data)
|
||||||
|
{
|
||||||
|
struct btrfs_fs_info *fs_info = root->fs_info;
|
||||||
|
/* No valid key type is 0, so all key should be larger than this key */
|
||||||
|
struct btrfs_key prev_key = {0, 0, 0};
|
||||||
|
struct btrfs_key key;
|
||||||
|
u32 nritems = btrfs_header_nritems(leaf);
|
||||||
|
int slot;
|
||||||
|
|
||||||
|
if (btrfs_header_level(leaf) != 0) {
|
||||||
|
generic_err(root, leaf, 0,
|
||||||
|
"invalid level for leaf, have %d expect 0",
|
||||||
|
btrfs_header_level(leaf));
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Extent buffers from a relocation tree have a owner field that
|
||||||
|
* corresponds to the subvolume tree they are based on. So just from an
|
||||||
|
* extent buffer alone we can not find out what is the id of the
|
||||||
|
* corresponding subvolume tree, so we can not figure out if the extent
|
||||||
|
* buffer corresponds to the root of the relocation tree or not. So
|
||||||
|
* skip this check for relocation trees.
|
||||||
|
*/
|
||||||
|
if (nritems == 0 && !btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_RELOC)) {
|
||||||
|
u64 owner = btrfs_header_owner(leaf);
|
||||||
|
struct btrfs_root *check_root;
|
||||||
|
|
||||||
|
/* These trees must never be empty */
|
||||||
|
if (owner == BTRFS_ROOT_TREE_OBJECTID ||
|
||||||
|
owner == BTRFS_CHUNK_TREE_OBJECTID ||
|
||||||
|
owner == BTRFS_EXTENT_TREE_OBJECTID ||
|
||||||
|
owner == BTRFS_DEV_TREE_OBJECTID ||
|
||||||
|
owner == BTRFS_FS_TREE_OBJECTID ||
|
||||||
|
owner == BTRFS_DATA_RELOC_TREE_OBJECTID) {
|
||||||
|
generic_err(root, leaf, 0,
|
||||||
|
"invalid root, root %llu must never be empty",
|
||||||
|
owner);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
key.objectid = owner;
|
||||||
|
key.type = BTRFS_ROOT_ITEM_KEY;
|
||||||
|
key.offset = (u64)-1;
|
||||||
|
|
||||||
|
check_root = btrfs_get_fs_root(fs_info, &key, false);
|
||||||
|
/*
|
||||||
|
* The only reason we also check NULL here is that during
|
||||||
|
* open_ctree() some roots has not yet been set up.
|
||||||
|
*/
|
||||||
|
if (!IS_ERR_OR_NULL(check_root)) {
|
||||||
|
struct extent_buffer *eb;
|
||||||
|
|
||||||
|
eb = btrfs_root_node(check_root);
|
||||||
|
/* if leaf is the root, then it's fine */
|
||||||
|
if (leaf != eb) {
|
||||||
|
CORRUPT("non-root leaf's nritems is 0",
|
||||||
|
leaf, check_root, 0);
|
||||||
|
free_extent_buffer(eb);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
free_extent_buffer(eb);
|
||||||
|
}
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (nritems == 0)
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Check the following things to make sure this is a good leaf, and
|
||||||
|
* leaf users won't need to bother with similar sanity checks:
|
||||||
|
*
|
||||||
|
* 1) key ordering
|
||||||
|
* 2) item offset and size
|
||||||
|
* No overlap, no hole, all inside the leaf.
|
||||||
|
* 3) item content
|
||||||
|
* If possible, do comprehensive sanity check.
|
||||||
|
* NOTE: All checks must only rely on the item data itself.
|
||||||
|
*/
|
||||||
|
for (slot = 0; slot < nritems; slot++) {
|
||||||
|
u32 item_end_expected;
|
||||||
|
int ret;
|
||||||
|
|
||||||
|
btrfs_item_key_to_cpu(leaf, &key, slot);
|
||||||
|
|
||||||
|
/* Make sure the keys are in the right order */
|
||||||
|
if (btrfs_comp_cpu_keys(&prev_key, &key) >= 0) {
|
||||||
|
CORRUPT("bad key order", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Make sure the offset and ends are right, remember that the
|
||||||
|
* item data starts at the end of the leaf and grows towards the
|
||||||
|
* front.
|
||||||
|
*/
|
||||||
|
if (slot == 0)
|
||||||
|
item_end_expected = BTRFS_LEAF_DATA_SIZE(root);
|
||||||
|
else
|
||||||
|
item_end_expected = btrfs_item_offset_nr(leaf,
|
||||||
|
slot - 1);
|
||||||
|
if (btrfs_item_end_nr(leaf, slot) != item_end_expected) {
|
||||||
|
CORRUPT("slot offset bad", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Check to make sure that we don't point outside of the leaf,
|
||||||
|
* just in case all the items are consistent to each other, but
|
||||||
|
* all point outside of the leaf.
|
||||||
|
*/
|
||||||
|
if (btrfs_item_end_nr(leaf, slot) >
|
||||||
|
BTRFS_LEAF_DATA_SIZE(root)) {
|
||||||
|
CORRUPT("slot end outside of leaf", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Also check if the item pointer overlaps with btrfs item. */
|
||||||
|
if (btrfs_item_nr_offset(slot) + sizeof(struct btrfs_item) >
|
||||||
|
btrfs_item_ptr_offset(leaf, slot)) {
|
||||||
|
CORRUPT("slot overlap with its data", leaf, root, slot);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (check_item_data) {
|
||||||
|
/*
|
||||||
|
* Check if the item size and content meet other
|
||||||
|
* criteria
|
||||||
|
*/
|
||||||
|
ret = check_leaf_item(root, leaf, &key, slot);
|
||||||
|
if (ret < 0)
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
prev_key.objectid = key.objectid;
|
||||||
|
prev_key.type = key.type;
|
||||||
|
prev_key.offset = key.offset;
|
||||||
|
}
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
int btrfs_check_leaf_full(struct btrfs_root *root, struct extent_buffer *leaf)
|
||||||
|
{
|
||||||
|
return check_leaf(root, leaf, true);
|
||||||
|
}
|
||||||
|
|
||||||
|
int btrfs_check_leaf_relaxed(struct btrfs_root *root,
|
||||||
|
struct extent_buffer *leaf)
|
||||||
|
{
|
||||||
|
return check_leaf(root, leaf, false);
|
||||||
|
}
|
||||||
|
|
||||||
|
int btrfs_check_node(struct btrfs_root *root, struct extent_buffer *node)
|
||||||
|
{
|
||||||
|
unsigned long nr = btrfs_header_nritems(node);
|
||||||
|
struct btrfs_key key, next_key;
|
||||||
|
int slot;
|
||||||
|
int level = btrfs_header_level(node);
|
||||||
|
u64 bytenr;
|
||||||
|
int ret = 0;
|
||||||
|
|
||||||
|
if (level <= 0 || level >= BTRFS_MAX_LEVEL) {
|
||||||
|
generic_err(root, node, 0,
|
||||||
|
"invalid level for node, have %d expect [1, %d]",
|
||||||
|
level, BTRFS_MAX_LEVEL - 1);
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
if (nr == 0 || nr > BTRFS_NODEPTRS_PER_BLOCK(root)) {
|
||||||
|
btrfs_crit(root->fs_info,
|
||||||
|
"corrupt node: root=%llu block=%llu, nritems too %s, have %lu expect range [1,%u]",
|
||||||
|
root->objectid, node->start,
|
||||||
|
nr == 0 ? "small" : "large", nr,
|
||||||
|
BTRFS_NODEPTRS_PER_BLOCK(root));
|
||||||
|
return -EUCLEAN;
|
||||||
|
}
|
||||||
|
|
||||||
|
for (slot = 0; slot < nr - 1; slot++) {
|
||||||
|
bytenr = btrfs_node_blockptr(node, slot);
|
||||||
|
btrfs_node_key_to_cpu(node, &key, slot);
|
||||||
|
btrfs_node_key_to_cpu(node, &next_key, slot + 1);
|
||||||
|
|
||||||
|
if (!bytenr) {
|
||||||
|
generic_err(root, node, slot,
|
||||||
|
"invalid NULL node pointer");
|
||||||
|
ret = -EUCLEAN;
|
||||||
|
goto out;
|
||||||
|
}
|
||||||
|
if (!IS_ALIGNED(bytenr, root->sectorsize)) {
|
||||||
|
generic_err(root, node, slot,
|
||||||
|
"unaligned pointer, have %llu should be aligned to %u",
|
||||||
|
bytenr, root->sectorsize);
|
||||||
|
ret = -EUCLEAN;
|
||||||
|
goto out;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (btrfs_comp_cpu_keys(&key, &next_key) >= 0) {
|
||||||
|
generic_err(root, node, slot,
|
||||||
|
"bad key order, current (%llu %u %llu) next (%llu %u %llu)",
|
||||||
|
key.objectid, key.type, key.offset,
|
||||||
|
next_key.objectid, next_key.type,
|
||||||
|
next_key.offset);
|
||||||
|
ret = -EUCLEAN;
|
||||||
|
goto out;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
out:
|
||||||
|
return ret;
|
||||||
|
}
|
fs/btrfs/tree-checker.h (new file, 38 lines)
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) Qu Wenruo 2017.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program.
+ */
+
+#ifndef __BTRFS_TREE_CHECKER__
+#define __BTRFS_TREE_CHECKER__
+
+#include "ctree.h"
+#include "extent_io.h"
+
+/*
+ * Comprehensive leaf checker.
+ * Will check not only the item pointers, but also every possible member
+ * in item data.
+ */
+int btrfs_check_leaf_full(struct btrfs_root *root, struct extent_buffer *leaf);
+
+/*
+ * Less strict leaf checker.
+ * Will only check item pointers, not reading item data.
+ */
+int btrfs_check_leaf_relaxed(struct btrfs_root *root,
+			     struct extent_buffer *leaf);
+int btrfs_check_node(struct btrfs_root *root, struct extent_buffer *node);
+
+#endif
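The checkers declared above enforce, among other things, strictly increasing key order inside a leaf. As a rough, hedged illustration of that ordering rule only, here is a self-contained userspace sketch; the struct and comparator names are invented for the demo and are not the kernel's own `btrfs_key`/`btrfs_comp_cpu_keys`.

```c
/* Userspace illustration only, not kernel code: btrfs-style key ordering,
 * assuming the usual rule of comparing objectid, then type, then offset. */
#include <stdint.h>
#include <stdio.h>

struct demo_key {		/* stand-in for struct btrfs_key */
	uint64_t objectid;
	uint8_t  type;
	uint64_t offset;
};

static int cmp_key(const struct demo_key *a, const struct demo_key *b)
{
	if (a->objectid != b->objectid)
		return a->objectid < b->objectid ? -1 : 1;
	if (a->type != b->type)
		return a->type < b->type ? -1 : 1;
	if (a->offset != b->offset)
		return a->offset < b->offset ? -1 : 1;
	return 0;
}

int main(void)
{
	/* A well-formed leaf must have strictly increasing keys. */
	struct demo_key keys[] = { {256, 1, 0}, {256, 12, 256}, {257, 1, 0} };
	int bad = 0;

	for (unsigned int i = 1; i < sizeof(keys) / sizeof(keys[0]); i++)
		if (cmp_key(&keys[i - 1], &keys[i]) >= 0) {
			printf("bad key order at slot %u\n", i);
			bad = 1;
		}
	if (!bad)
		printf("key order ok\n");
	return 0;
}
```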
@@ -4656,7 +4656,7 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 
 	if (type & BTRFS_BLOCK_GROUP_DATA) {
 		max_stripe_size = SZ_1G;
-		max_chunk_size = 10 * max_stripe_size;
+		max_chunk_size = BTRFS_MAX_DATA_CHUNK_SIZE;
 		if (!devs_max)
 			devs_max = BTRFS_MAX_DEVS(info->chunk_root);
 	} else if (type & BTRFS_BLOCK_GROUP_METADATA) {
@@ -6370,6 +6370,8 @@ static int btrfs_check_chunk_valid(struct btrfs_root *root,
 	u16 num_stripes;
 	u16 sub_stripes;
 	u64 type;
+	u64 features;
+	bool mixed = false;
 
 	length = btrfs_chunk_length(leaf, chunk);
 	stripe_len = btrfs_chunk_stripe_len(leaf, chunk);
@@ -6410,6 +6412,32 @@ static int btrfs_check_chunk_valid(struct btrfs_root *root,
 			  btrfs_chunk_type(leaf, chunk));
 		return -EIO;
 	}
+
+	if ((type & BTRFS_BLOCK_GROUP_TYPE_MASK) == 0) {
+		btrfs_err(root->fs_info, "missing chunk type flag: 0x%llx", type);
+		return -EIO;
+	}
+
+	if ((type & BTRFS_BLOCK_GROUP_SYSTEM) &&
+	    (type & (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_DATA))) {
+		btrfs_err(root->fs_info,
+			  "system chunk with data or metadata type: 0x%llx", type);
+		return -EIO;
+	}
+
+	features = btrfs_super_incompat_flags(root->fs_info->super_copy);
+	if (features & BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS)
+		mixed = true;
+
+	if (!mixed) {
+		if ((type & BTRFS_BLOCK_GROUP_METADATA) &&
+		    (type & BTRFS_BLOCK_GROUP_DATA)) {
+			btrfs_err(root->fs_info,
+			"mixed chunk type in non-mixed mode: 0x%llx", type);
+			return -EIO;
+		}
+	}
+
 	if ((type & BTRFS_BLOCK_GROUP_RAID10 && sub_stripes != 2) ||
 	    (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes < 1) ||
 	    (type & BTRFS_BLOCK_GROUP_RAID5 && num_stripes < 2) ||
@@ -24,6 +24,8 @@
 #include <linux/btrfs.h>
 #include "async-thread.h"
 
+#define BTRFS_MAX_DATA_CHUNK_SIZE	(10ULL * SZ_1G)
+
 extern struct mutex uuid_mutex;
 
 #define BTRFS_STRIPE_LEN	SZ_64K
@@ -3983,14 +3983,24 @@ static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con,
 	return auth;
 }
 
-static int verify_authorizer_reply(struct ceph_connection *con, int len)
+static int add_authorizer_challenge(struct ceph_connection *con,
+				    void *challenge_buf, int challenge_buf_len)
 {
 	struct ceph_mds_session *s = con->private;
 	struct ceph_mds_client *mdsc = s->s_mdsc;
 	struct ceph_auth_client *ac = mdsc->fsc->client->monc.auth;
 
-	return ceph_auth_verify_authorizer_reply(ac, s->s_auth.authorizer, len);
+	return ceph_auth_add_authorizer_challenge(ac, s->s_auth.authorizer,
+					    challenge_buf, challenge_buf_len);
+}
+
+static int verify_authorizer_reply(struct ceph_connection *con)
+{
+	struct ceph_mds_session *s = con->private;
+	struct ceph_mds_client *mdsc = s->s_mdsc;
+	struct ceph_auth_client *ac = mdsc->fsc->client->monc.auth;
+
+	return ceph_auth_verify_authorizer_reply(ac, s->s_auth.authorizer);
 }
 
 static int invalidate_authorizer(struct ceph_connection *con)
@@ -4046,6 +4056,7 @@ static const struct ceph_connection_operations mds_con_ops = {
 	.put = con_put,
 	.dispatch = dispatch,
 	.get_authorizer = get_authorizer,
+	.add_authorizer_challenge = add_authorizer_challenge,
 	.verify_authorizer_reply = verify_authorizer_reply,
 	.invalidate_authorizer = invalidate_authorizer,
 	.peer_reset = peer_reset,
@@ -118,6 +118,16 @@ static void huge_pagevec_release(struct pagevec *pvec)
 	pagevec_reinit(pvec);
 }
 
+/*
+ * Mask used when checking the page offset value passed in via system
+ * calls.  This value will be converted to a loff_t which is signed.
+ * Therefore, we want to check the upper PAGE_SHIFT + 1 bits of the
+ * value.  The extra bit (- 1 in the shift value) is to take the sign
+ * bit into account.
+ */
+#define PGOFF_LOFFT_MAX \
+	(((1UL << (PAGE_SHIFT + 1)) - 1) <<  (BITS_PER_LONG - (PAGE_SHIFT + 1)))
+
 static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct inode *inode = file_inode(file);
@@ -136,17 +146,31 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
 	vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND;
 	vma->vm_ops = &hugetlb_vm_ops;
 
+	/*
+	 * page based offset in vm_pgoff could be sufficiently large to
+	 * overflow a loff_t when converted to byte offset.  This can
+	 * only happen on architectures where sizeof(loff_t) ==
+	 * sizeof(unsigned long).  So, only check in those instances.
+	 */
+	if (sizeof(unsigned long) == sizeof(loff_t)) {
+		if (vma->vm_pgoff & PGOFF_LOFFT_MAX)
+			return -EINVAL;
+	}
+
+	/* must be huge page aligned */
 	if (vma->vm_pgoff & (~huge_page_mask(h) >> PAGE_SHIFT))
 		return -EINVAL;
 
 	vma_len = (loff_t)(vma->vm_end - vma->vm_start);
+	len = vma_len + ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
+	/* check for overflow */
+	if (len < vma_len)
+		return -EINVAL;
 
 	inode_lock(inode);
 	file_accessed(file);
 
 	ret = -ENOMEM;
-	len = vma_len + ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
 
 	if (hugetlb_reserve_pages(inode,
 				vma->vm_pgoff >> huge_page_order(h),
 				len >> huge_page_shift(h), vma,
@@ -155,7 +179,7 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
 
 	ret = 0;
 	if (vma->vm_flags & VM_WRITE && inode->i_size < len)
-		inode->i_size = len;
+		i_size_write(inode, len);
 out:
 	inode_unlock(inode);
 
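As a sanity check on the mask above, the following userspace sketch works the arithmetic through concrete numbers; PAGE_SHIFT of 12 and 64-bit `unsigned long` are assumptions of the demo, not values taken from the patch.

```c
/* Userspace illustration of the PGOFF_LOFFT_MAX check; PAGE_SHIFT = 12 and
 * BITS_PER_LONG = 64 are assumed here purely for the demo. */
#include <stdio.h>

#define DEMO_PAGE_SHIFT    12
#define DEMO_BITS_PER_LONG 64
#define DEMO_PGOFF_LOFFT_MAX \
	(((1UL << (DEMO_PAGE_SHIFT + 1)) - 1) << \
	 (DEMO_BITS_PER_LONG - (DEMO_PAGE_SHIFT + 1)))

int main(void)
{
	/* Largest page offset whose byte offset still fits in a signed loff_t. */
	unsigned long ok_pgoff  = (1UL << (DEMO_BITS_PER_LONG - DEMO_PAGE_SHIFT - 1)) - 1;
	/* One page further and the byte offset would hit the sign bit. */
	unsigned long bad_pgoff = ok_pgoff + 1;

	printf("mask             = %#lx\n", DEMO_PGOFF_LOFFT_MAX);
	printf("ok_pgoff  & mask = %#lx (accepted)\n",
	       ok_pgoff & DEMO_PGOFF_LOFFT_MAX);
	printf("bad_pgoff & mask = %#lx (rejected with -EINVAL)\n",
	       bad_pgoff & DEMO_PGOFF_LOFFT_MAX);
	return 0;
}
```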
@@ -88,7 +88,7 @@ static int kernfs_get_target_path(struct kernfs_node *parent,
 		int slen = strlen(kn->name);
 
 		len -= slen;
-		strncpy(s + len, kn->name, slen);
+		memcpy(s + len, kn->name, slen);
 		if (len)
 			s[--len] = '/';
 
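The strncpy here copied exactly `slen` bytes of a string whose length is already `slen`, so it never appended a terminating NUL anyway; memcpy produces the same bytes while avoiding gcc 8's string-truncation warning. A small userspace sketch of that equivalence (buffer sizes are arbitrary for the demo):

```c
/* Userspace demo: with a known copy length, strncpy(dst, src, len) and
 * memcpy(dst, src, len) write identical bytes and neither NUL-terminates. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *name = "symlink-target";
	size_t slen = strlen(name);
	char a[32], b[32];

	memset(a, 'X', sizeof(a));
	memset(b, 'X', sizeof(b));
	strncpy(a, name, slen);	/* copies slen bytes, no NUL appended */
	memcpy(b, name, slen);	/* same bytes, no truncation warning  */

	printf("identical: %s\n", memcmp(a, b, sizeof(a)) == 0 ? "yes" : "no");
	return 0;
}
```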
@@ -929,16 +929,20 @@ static int udf_load_pvoldesc(struct super_block *sb, sector_t block)
 	}
 
 	ret = udf_dstrCS0toUTF8(outstr, 31, pvoldesc->volIdent, 32);
-	if (ret < 0)
-		goto out_bh;
-
-	strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);
+	if (ret < 0) {
+		strcpy(UDF_SB(sb)->s_volume_ident, "InvalidName");
+		pr_warn("incorrect volume identification, setting to "
+			"'InvalidName'\n");
+	} else {
+		strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);
+	}
 	udf_debug("volIdent[] = '%s'\n", UDF_SB(sb)->s_volume_ident);
 
 	ret = udf_dstrCS0toUTF8(outstr, 127, pvoldesc->volSetIdent, 128);
-	if (ret < 0)
+	if (ret < 0) {
+		ret = 0;
 		goto out_bh;
+	}
 	outstr[ret] = 0;
 	udf_debug("volSetIdent[] = '%s'\n", outstr);
 
@@ -341,6 +341,11 @@ try_again:
 	return u_len;
 }
 
+/*
+ * Convert CS0 dstring to output charset. Warning: This function may truncate
+ * input string if it is too long as it is used for informational strings only
+ * and it is better to truncate the string than to refuse mounting a media.
+ */
 int udf_dstrCS0toUTF8(uint8_t *utf_o, int o_len,
 		      const uint8_t *ocu_i, int i_len)
 {
@@ -349,9 +354,12 @@ int udf_dstrCS0toUTF8(uint8_t *utf_o, int o_len,
 	if (i_len > 0) {
 		s_len = ocu_i[i_len - 1];
 		if (s_len >= i_len) {
-			pr_err("incorrect dstring lengths (%d/%d)\n",
-			       s_len, i_len);
-			return -EINVAL;
+			pr_warn("incorrect dstring lengths (%d/%d),"
+				" truncating\n", s_len, i_len);
+			s_len = i_len - 1;
+			/* 2-byte encoding? Need to round properly... */
+			if (ocu_i[0] == 16)
+				s_len -= (s_len - 1) & 2;
 		}
 	}
 
@@ -487,7 +487,14 @@ xfs_attr_shortform_addname(xfs_da_args_t *args)
 		if (args->flags & ATTR_CREATE)
 			return retval;
 		retval = xfs_attr_shortform_remove(args);
-		ASSERT(retval == 0);
+		if (retval)
+			return retval;
+		/*
+		 * Since we have removed the old attr, clear ATTR_REPLACE so
+		 * that the leaf format add routine won't trip over the attr
+		 * not being around.
+		 */
+		args->flags &= ~ATTR_REPLACE;
 	}
 
 	if (args->namelen >= XFS_ATTR_SF_ENTSIZE_MAX ||
@@ -71,6 +71,7 @@ struct bpf_insn_aux_data {
 		enum bpf_reg_type ptr_type;	/* pointer type for load/store insns */
 		struct bpf_map *map_ptr;	/* pointer for call insn into lookup_elem */
 	};
+	int sanitize_stack_off; /* stack slot to be cleared */
 	bool seen; /* this insn was processed by the verifier */
 };
 
@@ -63,8 +63,12 @@ struct ceph_auth_client_ops {
 	/* ensure that an existing authorizer is up to date */
 	int (*update_authorizer)(struct ceph_auth_client *ac, int peer_type,
 				 struct ceph_auth_handshake *auth);
+	int (*add_authorizer_challenge)(struct ceph_auth_client *ac,
+					struct ceph_authorizer *a,
+					void *challenge_buf,
+					int challenge_buf_len);
 	int (*verify_authorizer_reply)(struct ceph_auth_client *ac,
-				       struct ceph_authorizer *a, size_t len);
+				       struct ceph_authorizer *a);
 	void (*invalidate_authorizer)(struct ceph_auth_client *ac,
 				      int peer_type);
 
@@ -117,9 +121,12 @@ void ceph_auth_destroy_authorizer(struct ceph_authorizer *a);
 extern int ceph_auth_update_authorizer(struct ceph_auth_client *ac,
 				       int peer_type,
 				       struct ceph_auth_handshake *a);
+int ceph_auth_add_authorizer_challenge(struct ceph_auth_client *ac,
+				       struct ceph_authorizer *a,
+				       void *challenge_buf,
+				       int challenge_buf_len);
 extern int ceph_auth_verify_authorizer_reply(struct ceph_auth_client *ac,
-					     struct ceph_authorizer *a,
-					     size_t len);
+					     struct ceph_authorizer *a);
 extern void ceph_auth_invalidate_authorizer(struct ceph_auth_client *ac,
 					    int peer_type);
 
@@ -76,6 +76,7 @@
 // duplicated since it was introduced at the same time as CEPH_FEATURE_CRUSH_TUNABLES5
 #define CEPH_FEATURE_NEW_OSDOPREPLY_ENCODING	(1ULL<<58) /* New, v7 encoding */
 #define CEPH_FEATURE_FS_FILE_LAYOUT_V2	(1ULL<<58) /* file_layout_t */
+#define CEPH_FEATURE_CEPHX_V2	(1ULL<<61) // *do not share this bit*
 
 /*
  * The introduction of CEPH_FEATURE_OSD_SNAPMAPPER caused the feature
@@ -124,7 +125,8 @@ static inline u64 ceph_sanitize_features(u64 features)
 	 CEPH_FEATURE_MSGR_KEEPALIVE2 |		\
 	 CEPH_FEATURE_CRUSH_V4 |		\
 	 CEPH_FEATURE_CRUSH_TUNABLES5 |		\
-	 CEPH_FEATURE_NEW_OSDOPREPLY_ENCODING)
+	 CEPH_FEATURE_NEW_OSDOPREPLY_ENCODING |	\
+	 CEPH_FEATURE_CEPHX_V2)
 
 #define CEPH_FEATURES_REQUIRED_DEFAULT		\
 	(CEPH_FEATURE_NOSRCADDR |		\
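CEPHX_V2 occupies feature bit 61 and is now advertised in the supported-features mask, which is how client and server agree to use the challenge-based cephx flow added elsewhere in this merge. A trivial userspace check of that bit (constant name prefixed for the demo):

```c
/* Userspace demo: testing the cephx v2 feature bit in a peer's feature mask. */
#include <stdint.h>
#include <stdio.h>

#define DEMO_CEPH_FEATURE_CEPHX_V2 (1ULL << 61)

static int peer_supports_cephx_v2(uint64_t peer_features)
{
	return (peer_features & DEMO_CEPH_FEATURE_CEPHX_V2) != 0;
}

int main(void)
{
	uint64_t old_peer = 0;				/* pre-v2 server   */
	uint64_t new_peer = DEMO_CEPH_FEATURE_CEPHX_V2;	/* v2-capable peer */

	printf("old peer supports cephx v2: %d\n", peer_supports_cephx_v2(old_peer));
	printf("new peer supports cephx v2: %d\n", peer_supports_cephx_v2(new_peer));
	return 0;
}
```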
@@ -30,7 +30,10 @@ struct ceph_connection_operations {
 	struct ceph_auth_handshake *(*get_authorizer) (
 				struct ceph_connection *con,
 			       int *proto, int force_new);
-	int (*verify_authorizer_reply) (struct ceph_connection *con, int len);
+	int (*add_authorizer_challenge)(struct ceph_connection *con,
+					void *challenge_buf,
+					int challenge_buf_len);
+	int (*verify_authorizer_reply) (struct ceph_connection *con);
 	int (*invalidate_authorizer)(struct ceph_connection *con);
 
 	/* there was some error on the socket (disconnect, whatever) */
@@ -200,9 +203,8 @@ struct ceph_connection {
 				 attempt for this connection, client */
 	u32 peer_global_seq;  /* peer's global seq for this connection */
 
+	struct ceph_auth_handshake *auth;
 	int auth_retry;       /* true if we need a newer authorizer */
-	void *auth_reply_buf;   /* where to put the authorizer reply */
-	int auth_reply_buf_len;
 
 	struct mutex mutex;
 
@@ -90,7 +90,8 @@ struct ceph_entity_inst {
 #define CEPH_MSGR_TAG_SEQ           13 /* 64-bit int follows with seen seq number */
 #define CEPH_MSGR_TAG_KEEPALIVE2    14 /* keepalive2 byte + ceph_timespec */
 #define CEPH_MSGR_TAG_KEEPALIVE2_ACK 15 /* keepalive2 reply */
+#define CEPH_MSGR_TAG_CHALLENGE_AUTHORIZER 16 /* cephx v2 doing server challenge */
 
 /*
  * connection negotiation
@ -13,76 +13,82 @@ int reset_control_deassert(struct reset_control *rstc);
|
||||||
int reset_control_status(struct reset_control *rstc);
|
int reset_control_status(struct reset_control *rstc);
|
||||||
|
|
||||||
struct reset_control *__of_reset_control_get(struct device_node *node,
|
struct reset_control *__of_reset_control_get(struct device_node *node,
|
||||||
const char *id, int index, int shared);
|
const char *id, int index, bool shared,
|
||||||
|
bool optional);
|
||||||
|
struct reset_control *__reset_control_get(struct device *dev, const char *id,
|
||||||
|
int index, bool shared,
|
||||||
|
bool optional);
|
||||||
void reset_control_put(struct reset_control *rstc);
|
void reset_control_put(struct reset_control *rstc);
|
||||||
|
int __device_reset(struct device *dev, bool optional);
|
||||||
struct reset_control *__devm_reset_control_get(struct device *dev,
|
struct reset_control *__devm_reset_control_get(struct device *dev,
|
||||||
const char *id, int index, int shared);
|
const char *id, int index, bool shared,
|
||||||
|
bool optional);
|
||||||
int __must_check device_reset(struct device *dev);
|
|
||||||
|
|
||||||
static inline int device_reset_optional(struct device *dev)
|
|
||||||
{
|
|
||||||
return device_reset(dev);
|
|
||||||
}
|
|
||||||
|
|
||||||
#else
|
#else
|
||||||
|
|
||||||
static inline int reset_control_reset(struct reset_control *rstc)
|
static inline int reset_control_reset(struct reset_control *rstc)
|
||||||
{
|
{
|
||||||
WARN_ON(1);
|
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline int reset_control_assert(struct reset_control *rstc)
|
static inline int reset_control_assert(struct reset_control *rstc)
|
||||||
{
|
{
|
||||||
WARN_ON(1);
|
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline int reset_control_deassert(struct reset_control *rstc)
|
static inline int reset_control_deassert(struct reset_control *rstc)
|
||||||
{
|
{
|
||||||
WARN_ON(1);
|
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline int reset_control_status(struct reset_control *rstc)
|
static inline int reset_control_status(struct reset_control *rstc)
|
||||||
{
|
{
|
||||||
WARN_ON(1);
|
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline void reset_control_put(struct reset_control *rstc)
|
static inline void reset_control_put(struct reset_control *rstc)
|
||||||
{
|
{
|
||||||
WARN_ON(1);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline int __must_check device_reset(struct device *dev)
|
static inline int __device_reset(struct device *dev, bool optional)
|
||||||
{
|
{
|
||||||
WARN_ON(1);
|
return optional ? 0 : -ENOTSUPP;
|
||||||
return -ENOTSUPP;
|
|
||||||
}
|
|
||||||
|
|
||||||
static inline int device_reset_optional(struct device *dev)
|
|
||||||
{
|
|
||||||
return -ENOTSUPP;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline struct reset_control *__of_reset_control_get(
|
static inline struct reset_control *__of_reset_control_get(
|
||||||
struct device_node *node,
|
struct device_node *node,
|
||||||
const char *id, int index, int shared)
|
const char *id, int index, bool shared,
|
||||||
|
bool optional)
|
||||||
{
|
{
|
||||||
return ERR_PTR(-ENOTSUPP);
|
return optional ? NULL : ERR_PTR(-ENOTSUPP);
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline struct reset_control *__reset_control_get(
|
||||||
|
struct device *dev, const char *id,
|
||||||
|
int index, bool shared, bool optional)
|
||||||
|
{
|
||||||
|
return optional ? NULL : ERR_PTR(-ENOTSUPP);
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline struct reset_control *__devm_reset_control_get(
|
static inline struct reset_control *__devm_reset_control_get(
|
||||||
struct device *dev,
|
struct device *dev, const char *id,
|
||||||
const char *id, int index, int shared)
|
int index, bool shared, bool optional)
|
||||||
{
|
{
|
||||||
return ERR_PTR(-ENOTSUPP);
|
return optional ? NULL : ERR_PTR(-ENOTSUPP);
|
||||||
}
|
}
|
||||||
|
|
||||||
#endif /* CONFIG_RESET_CONTROLLER */
|
#endif /* CONFIG_RESET_CONTROLLER */
|
||||||
|
|
||||||
|
static inline int __must_check device_reset(struct device *dev)
|
||||||
|
{
|
||||||
|
return __device_reset(dev, false);
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline int device_reset_optional(struct device *dev)
|
||||||
|
{
|
||||||
|
return __device_reset(dev, true);
|
||||||
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* reset_control_get_exclusive - Lookup and obtain an exclusive reference
|
* reset_control_get_exclusive - Lookup and obtain an exclusive reference
|
||||||
* to a reset controller.
|
* to a reset controller.
|
||||||
|
@ -101,10 +107,7 @@ static inline struct reset_control *__devm_reset_control_get(
|
||||||
static inline struct reset_control *
|
 static inline struct reset_control *
 __must_check reset_control_get_exclusive(struct device *dev, const char *id)
 {
-#ifndef CONFIG_RESET_CONTROLLER
-	WARN_ON(1);
-#endif
-	return __of_reset_control_get(dev ? dev->of_node : NULL, id, 0, 0);
+	return __reset_control_get(dev, id, 0, false, false);
 }

 /**
@@ -132,19 +135,19 @@ __must_check reset_control_get_exclusive(struct device *dev, const char *id)
 static inline struct reset_control *reset_control_get_shared(
					struct device *dev, const char *id)
 {
-	return __of_reset_control_get(dev ? dev->of_node : NULL, id, 0, 1);
+	return __reset_control_get(dev, id, 0, true, false);
 }

 static inline struct reset_control *reset_control_get_optional_exclusive(
					struct device *dev, const char *id)
 {
-	return __of_reset_control_get(dev ? dev->of_node : NULL, id, 0, 0);
+	return __reset_control_get(dev, id, 0, false, true);
 }

 static inline struct reset_control *reset_control_get_optional_shared(
					struct device *dev, const char *id)
 {
-	return __of_reset_control_get(dev ? dev->of_node : NULL, id, 0, 1);
+	return __reset_control_get(dev, id, 0, true, true);
 }

 /**
@@ -160,7 +163,7 @@ static inline struct reset_control *reset_control_get_optional_shared(
 static inline struct reset_control *of_reset_control_get_exclusive(
				struct device_node *node, const char *id)
 {
-	return __of_reset_control_get(node, id, 0, 0);
+	return __of_reset_control_get(node, id, 0, false, false);
 }

 /**
@@ -185,7 +188,7 @@ static inline struct reset_control *of_reset_control_get_exclusive(
 static inline struct reset_control *of_reset_control_get_shared(
				struct device_node *node, const char *id)
 {
-	return __of_reset_control_get(node, id, 0, 1);
+	return __of_reset_control_get(node, id, 0, true, false);
 }

 /**
@@ -202,7 +205,7 @@ static inline struct reset_control *of_reset_control_get_shared(
 static inline struct reset_control *of_reset_control_get_exclusive_by_index(
				struct device_node *node, int index)
 {
-	return __of_reset_control_get(node, NULL, index, 0);
+	return __of_reset_control_get(node, NULL, index, false, false);
 }

 /**
@@ -230,7 +233,7 @@ static inline struct reset_control *of_reset_control_get_exclusive_by_index(
 static inline struct reset_control *of_reset_control_get_shared_by_index(
				struct device_node *node, int index)
 {
-	return __of_reset_control_get(node, NULL, index, 1);
+	return __of_reset_control_get(node, NULL, index, true, false);
 }

 /**
@@ -249,10 +252,7 @@ static inline struct reset_control *
 __must_check devm_reset_control_get_exclusive(struct device *dev,
					      const char *id)
 {
-#ifndef CONFIG_RESET_CONTROLLER
-	WARN_ON(1);
-#endif
-	return __devm_reset_control_get(dev, id, 0, 0);
+	return __devm_reset_control_get(dev, id, 0, false, false);
 }

 /**
@@ -267,19 +267,19 @@ __must_check devm_reset_control_get_exclusive(struct device *dev,
 static inline struct reset_control *devm_reset_control_get_shared(
					struct device *dev, const char *id)
 {
-	return __devm_reset_control_get(dev, id, 0, 1);
+	return __devm_reset_control_get(dev, id, 0, true, false);
 }

 static inline struct reset_control *devm_reset_control_get_optional_exclusive(
					struct device *dev, const char *id)
 {
-	return __devm_reset_control_get(dev, id, 0, 0);
+	return __devm_reset_control_get(dev, id, 0, false, true);
 }

 static inline struct reset_control *devm_reset_control_get_optional_shared(
					struct device *dev, const char *id)
 {
-	return __devm_reset_control_get(dev, id, 0, 1);
+	return __devm_reset_control_get(dev, id, 0, true, true);
 }

 /**
@@ -297,7 +297,7 @@ static inline struct reset_control *devm_reset_control_get_optional_shared(
 static inline struct reset_control *
 devm_reset_control_get_exclusive_by_index(struct device *dev, int index)
 {
-	return __devm_reset_control_get(dev, NULL, index, 0);
+	return __devm_reset_control_get(dev, NULL, index, false, false);
 }

 /**
@@ -313,7 +313,7 @@ devm_reset_control_get_exclusive_by_index(struct device *dev, int index)
 static inline struct reset_control *
 devm_reset_control_get_shared_by_index(struct device *dev, int index)
 {
-	return __devm_reset_control_get(dev, NULL, index, 1);
+	return __devm_reset_control_get(dev, NULL, index, true, false);
 }

 /*
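With the stub changes above, the *_optional_* reset getters now return NULL when the device simply has no reset line, instead of an error or a WARN, and every reset_control_*() operation treats a NULL handle as a no-op. A minimal sketch of how a consumer could rely on that behaviour; the driver, probe wiring and device below are hypothetical and only illustrate the calling pattern, they are not part of this patch:

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/reset.h>

/* Hypothetical consumer whose DT binding makes the reset line optional. */
static int foo_probe(struct platform_device *pdev)
{
	struct reset_control *rst;

	/*
	 * NULL means "no reset described", which is fine for an optional
	 * line; real lookup or allocation failures still return ERR_PTR().
	 */
	rst = devm_reset_control_get_optional_exclusive(&pdev->dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	/* Both calls are harmless no-ops when rst is NULL. */
	reset_control_assert(rst);
	reset_control_deassert(rst);

	return 0;
}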
@@ -730,6 +730,7 @@ struct btrfs_balance_item {
 #define BTRFS_FILE_EXTENT_INLINE 0
 #define BTRFS_FILE_EXTENT_REG 1
 #define BTRFS_FILE_EXTENT_PREALLOC 2
+#define BTRFS_FILE_EXTENT_TYPES 2

 struct btrfs_file_extent_item {
	/*
@@ -540,10 +540,11 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
 /* check_stack_read/write functions track spill/fill of registers,
  * stack boundary and alignment are checked in check_mem_access()
  */
-static int check_stack_write(struct bpf_verifier_state *state, int off,
-			     int size, int value_regno)
+static int check_stack_write(struct bpf_verifier_env *env,
+			     struct bpf_verifier_state *state, int off,
+			     int size, int value_regno, int insn_idx)
 {
-	int i;
+	int i, spi = (MAX_BPF_STACK + off) / BPF_REG_SIZE;
 	/* caller checked that off % size == 0 and -MAX_BPF_STACK <= off < 0,
 	 * so it's aligned access and [off, off + size) are within stack limits
 	 */
@@ -558,15 +559,37 @@ static int check_stack_write(struct bpf_verifier_state *state, int off,
 		}

 		/* save register state */
-		state->spilled_regs[(MAX_BPF_STACK + off) / BPF_REG_SIZE] =
-			state->regs[value_regno];
+		state->spilled_regs[spi] = state->regs[value_regno];

-		for (i = 0; i < BPF_REG_SIZE; i++)
+		for (i = 0; i < BPF_REG_SIZE; i++) {
+			if (state->stack_slot_type[MAX_BPF_STACK + off + i] == STACK_MISC &&
+			    !env->allow_ptr_leaks) {
+				int *poff = &env->insn_aux_data[insn_idx].sanitize_stack_off;
+				int soff = (-spi - 1) * BPF_REG_SIZE;
+
+				/* detected reuse of integer stack slot with a pointer
+				 * which means either llvm is reusing stack slot or
+				 * an attacker is trying to exploit CVE-2018-3639
+				 * (speculative store bypass)
+				 * Have to sanitize that slot with preemptive
+				 * store of zero.
+				 */
+				if (*poff && *poff != soff) {
+					/* disallow programs where single insn stores
+					 * into two different stack slots, since verifier
+					 * cannot sanitize them
+					 */
+					verbose("insn %d cannot access two stack slots fp%d and fp%d",
+						insn_idx, *poff, soff);
+					return -EINVAL;
+				}
+				*poff = soff;
+			}
 			state->stack_slot_type[MAX_BPF_STACK + off + i] = STACK_SPILL;
+		}
 	} else {
 		/* regular write of data into stack */
-		state->spilled_regs[(MAX_BPF_STACK + off) / BPF_REG_SIZE] =
-			(struct bpf_reg_state) {};
+		state->spilled_regs[spi] = (struct bpf_reg_state) {};

 		for (i = 0; i < size; i++)
 			state->stack_slot_type[MAX_BPF_STACK + off + i] = STACK_MISC;
@@ -747,7 +770,7 @@ static int check_ptr_alignment(struct bpf_verifier_env *env,
  * if t==write && value_regno==-1, some unknown value is stored into memory
  * if t==read && value_regno==-1, don't care what we read from memory
  */
-static int check_mem_access(struct bpf_verifier_env *env, u32 regno, int off,
+static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno, int off,
			    int bpf_size, enum bpf_access_type t,
			    int value_regno)
 {
@@ -843,7 +866,8 @@ static int check_mem_access(struct bpf_verifier_env *env, u32 regno, int off,
 			verbose("attempt to corrupt spilled pointer on stack\n");
 			return -EACCES;
 		}
-		err = check_stack_write(state, off, size, value_regno);
+		err = check_stack_write(env, state, off, size,
+					value_regno, insn_idx);
 	} else {
 		err = check_stack_read(state, off, size, value_regno);
 	}
@@ -877,7 +901,7 @@ static int check_mem_access(struct bpf_verifier_env *env, u32 regno, int off,
 		return err;
 	}

-static int check_xadd(struct bpf_verifier_env *env, struct bpf_insn *insn)
+static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 {
 	struct bpf_reg_state *regs = env->cur_state.regs;
 	int err;
@@ -910,13 +934,13 @@ static int check_xadd(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	}

 	/* check whether atomic_add can read the memory */
-	err = check_mem_access(env, insn->dst_reg, insn->off,
+	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
			       BPF_SIZE(insn->code), BPF_READ, -1);
 	if (err)
 		return err;

 	/* check whether atomic_add can write into the same memory */
-	return check_mem_access(env, insn->dst_reg, insn->off,
+	return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
				BPF_SIZE(insn->code), BPF_WRITE, -1);
 }

@@ -1272,7 +1296,7 @@ static int check_call(struct bpf_verifier_env *env, int func_id, int insn_idx)
 	 * is inferred from register state.
 	 */
 	for (i = 0; i < meta.access_size; i++) {
-		err = check_mem_access(env, meta.regno, i, BPF_B, BPF_WRITE, -1);
+		err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B, BPF_WRITE, -1);
 		if (err)
 			return err;
 	}
@@ -2938,7 +2962,7 @@ static int do_check(struct bpf_verifier_env *env)
 			/* check that memory (src_reg + off) is readable,
 			 * the state of dst_reg will be updated by this func
 			 */
-			err = check_mem_access(env, insn->src_reg, insn->off,
+			err = check_mem_access(env, insn_idx, insn->src_reg, insn->off,
					       BPF_SIZE(insn->code), BPF_READ,
					       insn->dst_reg);
 			if (err)
@@ -2978,7 +3002,7 @@ static int do_check(struct bpf_verifier_env *env)
 			enum bpf_reg_type *prev_dst_type, dst_reg_type;

 			if (BPF_MODE(insn->code) == BPF_XADD) {
-				err = check_xadd(env, insn);
+				err = check_xadd(env, insn_idx, insn);
 				if (err)
 					return err;
 				insn_idx++;
@@ -2997,7 +3021,7 @@ static int do_check(struct bpf_verifier_env *env)
 			dst_reg_type = regs[insn->dst_reg].type;

 			/* check that memory (dst_reg + off) is writeable */
-			err = check_mem_access(env, insn->dst_reg, insn->off,
+			err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
					       BPF_SIZE(insn->code), BPF_WRITE,
					       insn->src_reg);
 			if (err)
@@ -3032,7 +3056,7 @@ static int do_check(struct bpf_verifier_env *env)
 			}

 			/* check that memory (dst_reg + off) is writeable */
-			err = check_mem_access(env, insn->dst_reg, insn->off,
+			err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
					       BPF_SIZE(insn->code), BPF_WRITE,
					       -1);
 			if (err)
@@ -3369,6 +3393,34 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 		else
 			continue;

+		if (type == BPF_WRITE &&
+		    env->insn_aux_data[i + delta].sanitize_stack_off) {
+			struct bpf_insn patch[] = {
+				/* Sanitize suspicious stack slot with zero.
+				 * There are no memory dependencies for this store,
+				 * since it's only using frame pointer and immediate
+				 * constant of zero
+				 */
+				BPF_ST_MEM(BPF_DW, BPF_REG_FP,
+					   env->insn_aux_data[i + delta].sanitize_stack_off,
+					   0),
+				/* the original STX instruction will immediately
+				 * overwrite the same stack slot with appropriate value
+				 */
+				*insn,
+			};
+
+			cnt = ARRAY_SIZE(patch);
+			new_prog = bpf_patch_insn_data(env, i + delta, patch, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			continue;
+		}
+
 		if (env->insn_aux_data[i + delta].ptr_type != PTR_TO_CTX)
 			continue;

@@ -129,13 +129,13 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 		}
 		if (i >= ARRAY_SIZE(kdb_name_table)) {
 			debug_kfree(kdb_name_table[0]);
-			memcpy(kdb_name_table, kdb_name_table+1,
+			memmove(kdb_name_table, kdb_name_table+1,
			       sizeof(kdb_name_table[0]) *
			       (ARRAY_SIZE(kdb_name_table)-1));
 		} else {
 			debug_kfree(knt1);
 			knt1 = kdb_name_table[i];
-			memcpy(kdb_name_table+i, kdb_name_table+i+1,
+			memmove(kdb_name_table+i, kdb_name_table+i+1,
			       sizeof(kdb_name_table[0]) *
			       (ARRAY_SIZE(kdb_name_table)-i-1));
 		}
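The kdb change above swaps memcpy() for memmove() because the copies shift entries within the same kdb_name_table array, so source and destination overlap, which memcpy() does not permit. A small standalone illustration of the same shift-by-one pattern (ordinary userspace C, not kernel code):

#include <stdio.h>
#include <string.h>

int main(void)
{
	/*
	 * Shift the table down by one slot, as kdbnearsym() does with
	 * kdb_name_table; the regions overlap, so memmove() is required.
	 */
	const char *table[4] = { "alpha", "bravo", "charlie", "delta" };

	memmove(table, table + 1, sizeof(table[0]) * 3);
	table[3] = NULL;

	for (int i = 0; i < 4; i++)
		printf("%d: %s\n", i, table[i] ? table[i] : "(empty)");

	return 0;
}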
@@ -608,7 +608,7 @@ static int prepare_uprobe(struct uprobe *uprobe, struct file *file,
 	BUG_ON((uprobe->offset & ~PAGE_MASK) +
			UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);

-	smp_wmb(); /* pairs with rmb() in find_active_uprobe() */
+	smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
 	set_bit(UPROBE_COPY_INSN, &uprobe->flags);

  out:
@@ -1902,10 +1902,18 @@ static void handle_swbp(struct pt_regs *regs)
 	 * After we hit the bp, _unregister + _register can install the
 	 * new and not-yet-analyzed uprobe at the same address, restart.
 	 */
-	smp_rmb(); /* pairs with wmb() in install_breakpoint() */
 	if (unlikely(!test_bit(UPROBE_COPY_INSN, &uprobe->flags)))
 		goto out;

+	/*
+	 * Pairs with the smp_wmb() in prepare_uprobe().
+	 *
+	 * Guarantees that if we see the UPROBE_COPY_INSN bit set, then
+	 * we must also see the stores to &uprobe->arch performed by the
+	 * prepare_uprobe() call.
+	 */
+	smp_rmb();
+
 	/* Tracing handlers use ->utask to communicate with fetch methods */
 	if (!get_utask())
 		goto out;
@@ -127,7 +127,7 @@ static void fill_kobj_path(struct kobject *kobj, char *path, int length)
 		int cur = strlen(kobject_name(parent));
 		/* back up enough to print this name with '/' */
 		length -= cur;
-		strncpy(path + length, kobject_name(parent), cur);
+		memcpy(path + length, kobject_name(parent), cur);
 		*(path + --length) = '/';
 	}

@@ -81,7 +81,7 @@ static void __init test_hexdump_prepare_test(size_t len, int rowsize,
 		const char *q = *result++;
 		size_t amount = strlen(q);

-		strncpy(p, q, amount);
+		memcpy(p, q, amount);
 		p += amount;

 		*p++ = ' ';

mm/hugetlb.c
@@ -4170,6 +4170,12 @@ int hugetlb_reserve_pages(struct inode *inode,
 	struct resv_map *resv_map;
 	long gbl_reserve;

+	/* This should never happen */
+	if (from > to) {
+		VM_WARN(1, "%s called with a negative range\n", __func__);
+		return -EINVAL;
+	}
+
 	/*
 	 * Only apply hugepage reservation if asked. At fault time, an
 	 * attempt will be made for VM_NORESERVE to allocate a page
@@ -4259,7 +4265,9 @@ int hugetlb_reserve_pages(struct inode *inode,
 	return 0;
 out_err:
 	if (!vma || vma->vm_flags & VM_MAYSHARE)
-		region_abort(resv_map, from, to);
+		/* Don't call region_abort if region_chg failed */
+		if (chg >= 0)
+			region_abort(resv_map, from, to);
 	if (vma && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
 		kref_put(&resv_map->refs, resv_map_release);
 	return ret;

@@ -443,9 +443,13 @@ void truncate_inode_pages_final(struct address_space *mapping)
 		 */
 		spin_lock_irq(&mapping->tree_lock);
 		spin_unlock_irq(&mapping->tree_lock);
-
-		truncate_inode_pages(mapping, 0);
 	}
+
+	/*
+	 * Cleancache needs notification even if there are no pages or shadow
+	 * entries.
+	 */
+	truncate_inode_pages(mapping, 0);
 }
 EXPORT_SYMBOL(truncate_inode_pages_final);

@@ -314,14 +314,30 @@ int ceph_auth_update_authorizer(struct ceph_auth_client *ac,
 }
 EXPORT_SYMBOL(ceph_auth_update_authorizer);

+int ceph_auth_add_authorizer_challenge(struct ceph_auth_client *ac,
+				       struct ceph_authorizer *a,
+				       void *challenge_buf,
+				       int challenge_buf_len)
+{
+	int ret = 0;
+
+	mutex_lock(&ac->mutex);
+	if (ac->ops && ac->ops->add_authorizer_challenge)
+		ret = ac->ops->add_authorizer_challenge(ac, a, challenge_buf,
+							challenge_buf_len);
+	mutex_unlock(&ac->mutex);
+	return ret;
+}
+EXPORT_SYMBOL(ceph_auth_add_authorizer_challenge);
+
 int ceph_auth_verify_authorizer_reply(struct ceph_auth_client *ac,
-				      struct ceph_authorizer *a, size_t len)
+				      struct ceph_authorizer *a)
 {
 	int ret = 0;

 	mutex_lock(&ac->mutex);
 	if (ac->ops && ac->ops->verify_authorizer_reply)
-		ret = ac->ops->verify_authorizer_reply(ac, a, len);
+		ret = ac->ops->verify_authorizer_reply(ac, a);
 	mutex_unlock(&ac->mutex);
 	return ret;
 }
@@ -8,6 +8,7 @@

 #include <linux/ceph/decode.h>
 #include <linux/ceph/auth.h>
+#include <linux/ceph/ceph_features.h>
 #include <linux/ceph/libceph.h>
 #include <linux/ceph/messenger.h>

@@ -69,25 +70,40 @@ static int ceph_x_encrypt(struct ceph_crypto_key *secret, void *buf,
 	return sizeof(u32) + ciphertext_len;
 }

+static int __ceph_x_decrypt(struct ceph_crypto_key *secret, void *p,
+			    int ciphertext_len)
+{
+	struct ceph_x_encrypt_header *hdr = p;
+	int plaintext_len;
+	int ret;
+
+	ret = ceph_crypt(secret, false, p, ciphertext_len, ciphertext_len,
+			 &plaintext_len);
+	if (ret)
+		return ret;
+
+	if (le64_to_cpu(hdr->magic) != CEPHX_ENC_MAGIC) {
+		pr_err("%s bad magic\n", __func__);
+		return -EINVAL;
+	}
+
+	return plaintext_len - sizeof(*hdr);
+}
+
 static int ceph_x_decrypt(struct ceph_crypto_key *secret, void **p, void *end)
 {
-	struct ceph_x_encrypt_header *hdr = *p + sizeof(u32);
-	int ciphertext_len, plaintext_len;
+	int ciphertext_len;
 	int ret;

 	ceph_decode_32_safe(p, end, ciphertext_len, e_inval);
 	ceph_decode_need(p, end, ciphertext_len, e_inval);

-	ret = ceph_crypt(secret, false, *p, end - *p, ciphertext_len,
-			 &plaintext_len);
-	if (ret)
+	ret = __ceph_x_decrypt(secret, *p, ciphertext_len);
+	if (ret < 0)
 		return ret;

-	if (hdr->struct_v != 1 || le64_to_cpu(hdr->magic) != CEPHX_ENC_MAGIC)
-		return -EPERM;
-
 	*p += ciphertext_len;
-	return plaintext_len - sizeof(struct ceph_x_encrypt_header);
+	return ret;

 e_inval:
 	return -EINVAL;
@@ -271,6 +287,51 @@ bad:
 	return -EINVAL;
 }

+/*
+ * Encode and encrypt the second part (ceph_x_authorize_b) of the
+ * authorizer.  The first part (ceph_x_authorize_a) should already be
+ * encoded.
+ */
+static int encrypt_authorizer(struct ceph_x_authorizer *au,
+			      u64 *server_challenge)
+{
+	struct ceph_x_authorize_a *msg_a;
+	struct ceph_x_authorize_b *msg_b;
+	void *p, *end;
+	int ret;
+
+	msg_a = au->buf->vec.iov_base;
+	WARN_ON(msg_a->ticket_blob.secret_id != cpu_to_le64(au->secret_id));
+	p = (void *)(msg_a + 1) + le32_to_cpu(msg_a->ticket_blob.blob_len);
+	end = au->buf->vec.iov_base + au->buf->vec.iov_len;
+
+	msg_b = p + ceph_x_encrypt_offset();
+	msg_b->struct_v = 2;
+	msg_b->nonce = cpu_to_le64(au->nonce);
+	if (server_challenge) {
+		msg_b->have_challenge = 1;
+		msg_b->server_challenge_plus_one =
+		    cpu_to_le64(*server_challenge + 1);
+	} else {
+		msg_b->have_challenge = 0;
+		msg_b->server_challenge_plus_one = 0;
+	}
+
+	ret = ceph_x_encrypt(&au->session_key, p, end - p, sizeof(*msg_b));
+	if (ret < 0)
+		return ret;
+
+	p += ret;
+	if (server_challenge) {
+		WARN_ON(p != end);
+	} else {
+		WARN_ON(p > end);
+		au->buf->vec.iov_len = p - au->buf->vec.iov_base;
+	}
+
+	return 0;
+}
+
 static void ceph_x_authorizer_cleanup(struct ceph_x_authorizer *au)
 {
 	ceph_crypto_key_destroy(&au->session_key);
@@ -287,7 +348,6 @@ static int ceph_x_build_authorizer(struct ceph_auth_client *ac,
 	int maxlen;
 	struct ceph_x_authorize_a *msg_a;
 	struct ceph_x_authorize_b *msg_b;
-	void *p, *end;
 	int ret;
 	int ticket_blob_len =
		(th->ticket_blob ? th->ticket_blob->vec.iov_len : 0);
@@ -331,21 +391,13 @@ static int ceph_x_build_authorizer(struct ceph_auth_client *ac,
 	dout(" th %p secret_id %lld %lld\n", th, th->secret_id,
	     le64_to_cpu(msg_a->ticket_blob.secret_id));

-	p = msg_a + 1;
-	p += ticket_blob_len;
-	end = au->buf->vec.iov_base + au->buf->vec.iov_len;
-
-	msg_b = p + ceph_x_encrypt_offset();
-	msg_b->struct_v = 1;
 	get_random_bytes(&au->nonce, sizeof(au->nonce));
-	msg_b->nonce = cpu_to_le64(au->nonce);
-	ret = ceph_x_encrypt(&au->session_key, p, end - p, sizeof(*msg_b));
-	if (ret < 0)
+	ret = encrypt_authorizer(au, NULL);
+	if (ret) {
+		pr_err("failed to encrypt authorizer: %d", ret);
 		goto out_au;
+	}

-	p += ret;
-	WARN_ON(p > end);
-	au->buf->vec.iov_len = p - au->buf->vec.iov_base;
 	dout(" built authorizer nonce %llx len %d\n", au->nonce,
	     (int)au->buf->vec.iov_len);
 	return 0;
@@ -622,8 +674,56 @@ static int ceph_x_update_authorizer(
 	return 0;
 }

+static int decrypt_authorize_challenge(struct ceph_x_authorizer *au,
+				       void *challenge_buf,
+				       int challenge_buf_len,
+				       u64 *server_challenge)
+{
+	struct ceph_x_authorize_challenge *ch =
+	    challenge_buf + sizeof(struct ceph_x_encrypt_header);
+	int ret;
+
+	/* no leading len */
+	ret = __ceph_x_decrypt(&au->session_key, challenge_buf,
+			       challenge_buf_len);
+	if (ret < 0)
+		return ret;
+	if (ret < sizeof(*ch)) {
+		pr_err("bad size %d for ceph_x_authorize_challenge\n", ret);
+		return -EINVAL;
+	}
+
+	*server_challenge = le64_to_cpu(ch->server_challenge);
+	return 0;
+}
+
+static int ceph_x_add_authorizer_challenge(struct ceph_auth_client *ac,
+					   struct ceph_authorizer *a,
+					   void *challenge_buf,
+					   int challenge_buf_len)
+{
+	struct ceph_x_authorizer *au = (void *)a;
+	u64 server_challenge;
+	int ret;
+
+	ret = decrypt_authorize_challenge(au, challenge_buf, challenge_buf_len,
+					  &server_challenge);
+	if (ret) {
+		pr_err("failed to decrypt authorize challenge: %d", ret);
+		return ret;
+	}
+
+	ret = encrypt_authorizer(au, &server_challenge);
+	if (ret) {
+		pr_err("failed to encrypt authorizer w/ challenge: %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
 static int ceph_x_verify_authorizer_reply(struct ceph_auth_client *ac,
-					  struct ceph_authorizer *a, size_t len)
+					  struct ceph_authorizer *a)
 {
 	struct ceph_x_authorizer *au = (void *)a;
 	void *p = au->enc_buf;
@@ -633,8 +733,10 @@ static int ceph_x_verify_authorizer_reply(struct ceph_auth_client *ac,
 	ret = ceph_x_decrypt(&au->session_key, &p, p + CEPHX_AU_ENC_BUF_LEN);
 	if (ret < 0)
 		return ret;
-	if (ret != sizeof(*reply))
-		return -EPERM;
+	if (ret < sizeof(*reply)) {
+		pr_err("bad size %d for ceph_x_authorize_reply\n", ret);
+		return -EINVAL;
+	}

 	if (au->nonce + 1 != le64_to_cpu(reply->nonce_plus_one))
 		ret = -EPERM;
@@ -700,26 +802,64 @@ static int calc_signature(struct ceph_x_authorizer *au, struct ceph_msg *msg,
			  __le64 *psig)
 {
 	void *enc_buf = au->enc_buf;
-	struct {
-		__le32 len;
-		__le32 header_crc;
-		__le32 front_crc;
-		__le32 middle_crc;
-		__le32 data_crc;
-	} __packed *sigblock = enc_buf + ceph_x_encrypt_offset();
 	int ret;

-	sigblock->len = cpu_to_le32(4*sizeof(u32));
-	sigblock->header_crc = msg->hdr.crc;
-	sigblock->front_crc = msg->footer.front_crc;
-	sigblock->middle_crc = msg->footer.middle_crc;
-	sigblock->data_crc = msg->footer.data_crc;
-	ret = ceph_x_encrypt(&au->session_key, enc_buf, CEPHX_AU_ENC_BUF_LEN,
-			     sizeof(*sigblock));
-	if (ret < 0)
-		return ret;
+	if (msg->con->peer_features & CEPH_FEATURE_CEPHX_V2) {
+		struct {
+			__le32 len;
+			__le32 header_crc;
+			__le32 front_crc;
+			__le32 middle_crc;
+			__le32 data_crc;
+		} __packed *sigblock = enc_buf + ceph_x_encrypt_offset();
+
+		sigblock->len = cpu_to_le32(4*sizeof(u32));
+		sigblock->header_crc = msg->hdr.crc;
+		sigblock->front_crc = msg->footer.front_crc;
+		sigblock->middle_crc = msg->footer.middle_crc;
+		sigblock->data_crc = msg->footer.data_crc;
+
+		ret = ceph_x_encrypt(&au->session_key, enc_buf,
+				     CEPHX_AU_ENC_BUF_LEN, sizeof(*sigblock));
+		if (ret < 0)
+			return ret;
+
+		*psig = *(__le64 *)(enc_buf + sizeof(u32));
+	} else {
+		struct {
+			__le32 header_crc;
+			__le32 front_crc;
+			__le32 front_len;
+			__le32 middle_crc;
+			__le32 middle_len;
+			__le32 data_crc;
+			__le32 data_len;
+			__le32 seq_lower_word;
+		} __packed *sigblock = enc_buf;
+		struct {
+			__le64 a, b, c, d;
+		} __packed *penc = enc_buf;
+		int ciphertext_len;
+
+		sigblock->header_crc = msg->hdr.crc;
+		sigblock->front_crc = msg->footer.front_crc;
+		sigblock->front_len = msg->hdr.front_len;
+		sigblock->middle_crc = msg->footer.middle_crc;
+		sigblock->middle_len = msg->hdr.middle_len;
+		sigblock->data_crc = msg->footer.data_crc;
+		sigblock->data_len = msg->hdr.data_len;
+		sigblock->seq_lower_word = *(__le32 *)&msg->hdr.seq;
+
+		/* no leading len, no ceph_x_encrypt_header */
+		ret = ceph_crypt(&au->session_key, true, enc_buf,
+				 CEPHX_AU_ENC_BUF_LEN, sizeof(*sigblock),
+				 &ciphertext_len);
+		if (ret)
+			return ret;
+
+		*psig = penc->a ^ penc->b ^ penc->c ^ penc->d;
+	}

-	*psig = *(__le64 *)(enc_buf + sizeof(u32));
 	return 0;
 }

@@ -774,6 +914,7 @@ static const struct ceph_auth_client_ops ceph_x_ops = {
 	.handle_reply = ceph_x_handle_reply,
 	.create_authorizer = ceph_x_create_authorizer,
 	.update_authorizer = ceph_x_update_authorizer,
+	.add_authorizer_challenge = ceph_x_add_authorizer_challenge,
 	.verify_authorizer_reply = ceph_x_verify_authorizer_reply,
 	.invalidate_authorizer = ceph_x_invalidate_authorizer,
 	.reset = ceph_x_reset,
@@ -69,6 +69,13 @@ struct ceph_x_authorize_a {
 struct ceph_x_authorize_b {
 	__u8 struct_v;
 	__le64 nonce;
+	__u8 have_challenge;
+	__le64 server_challenge_plus_one;
+} __attribute__ ((packed));
+
+struct ceph_x_authorize_challenge {
+	__u8 struct_v;
+	__le64 server_challenge;
 } __attribute__ ((packed));

 struct ceph_x_authorize_reply {
@@ -1394,30 +1394,26 @@ static void prepare_write_keepalive(struct ceph_connection *con)
  * Connection negotiation.
  */

-static struct ceph_auth_handshake *get_connect_authorizer(struct ceph_connection *con,
-						int *auth_proto)
+static int get_connect_authorizer(struct ceph_connection *con)
 {
 	struct ceph_auth_handshake *auth;
+	int auth_proto;

 	if (!con->ops->get_authorizer) {
+		con->auth = NULL;
 		con->out_connect.authorizer_protocol = CEPH_AUTH_UNKNOWN;
 		con->out_connect.authorizer_len = 0;
-		return NULL;
+		return 0;
 	}

-	/* Can't hold the mutex while getting authorizer */
-	mutex_unlock(&con->mutex);
-	auth = con->ops->get_authorizer(con, auth_proto, con->auth_retry);
-	mutex_lock(&con->mutex);
-
+	auth = con->ops->get_authorizer(con, &auth_proto, con->auth_retry);
 	if (IS_ERR(auth))
-		return auth;
-	if (con->state != CON_STATE_NEGOTIATING)
-		return ERR_PTR(-EAGAIN);
+		return PTR_ERR(auth);

-	con->auth_reply_buf = auth->authorizer_reply_buf;
-	con->auth_reply_buf_len = auth->authorizer_reply_buf_len;
-	return auth;
+	con->auth = auth;
+	con->out_connect.authorizer_protocol = cpu_to_le32(auth_proto);
+	con->out_connect.authorizer_len = cpu_to_le32(auth->authorizer_buf_len);
+	return 0;
 }

 /*
@@ -1433,12 +1429,22 @@ static void prepare_write_banner(struct ceph_connection *con)
 	con_flag_set(con, CON_FLAG_WRITE_PENDING);
 }

+static void __prepare_write_connect(struct ceph_connection *con)
+{
+	con_out_kvec_add(con, sizeof(con->out_connect), &con->out_connect);
+	if (con->auth)
+		con_out_kvec_add(con, con->auth->authorizer_buf_len,
+				 con->auth->authorizer_buf);
+
+	con->out_more = 0;
+	con_flag_set(con, CON_FLAG_WRITE_PENDING);
+}
+
 static int prepare_write_connect(struct ceph_connection *con)
 {
 	unsigned int global_seq = get_global_seq(con->msgr, 0);
 	int proto;
-	int auth_proto;
-	struct ceph_auth_handshake *auth;
+	int ret;

 	switch (con->peer_name.type) {
 	case CEPH_ENTITY_TYPE_MON:
@@ -1465,24 +1471,11 @@ static int prepare_write_connect(struct ceph_connection *con)
 	con->out_connect.protocol_version = cpu_to_le32(proto);
 	con->out_connect.flags = 0;

-	auth_proto = CEPH_AUTH_UNKNOWN;
-	auth = get_connect_authorizer(con, &auth_proto);
-	if (IS_ERR(auth))
-		return PTR_ERR(auth);
-
-	con->out_connect.authorizer_protocol = cpu_to_le32(auth_proto);
-	con->out_connect.authorizer_len = auth ?
-		cpu_to_le32(auth->authorizer_buf_len) : 0;
-
-	con_out_kvec_add(con, sizeof (con->out_connect),
-					&con->out_connect);
-	if (auth && auth->authorizer_buf_len)
-		con_out_kvec_add(con, auth->authorizer_buf_len,
-					auth->authorizer_buf);
-
-	con->out_more = 0;
-	con_flag_set(con, CON_FLAG_WRITE_PENDING);
-
+	ret = get_connect_authorizer(con);
+	if (ret)
+		return ret;

+	__prepare_write_connect(con);
 	return 0;
 }

@@ -1743,11 +1736,21 @@ static int read_partial_connect(struct ceph_connection *con)
 	if (ret <= 0)
 		goto out;

-	size = le32_to_cpu(con->in_reply.authorizer_len);
-	end += size;
-	ret = read_partial(con, end, size, con->auth_reply_buf);
-	if (ret <= 0)
-		goto out;
+	if (con->auth) {
+		size = le32_to_cpu(con->in_reply.authorizer_len);
+		if (size > con->auth->authorizer_reply_buf_len) {
+			pr_err("authorizer reply too big: %d > %zu\n", size,
			       con->auth->authorizer_reply_buf_len);
+			ret = -EINVAL;
+			goto out;
+		}
+
+		end += size;
+		ret = read_partial(con, end, size,
+				   con->auth->authorizer_reply_buf);
+		if (ret <= 0)
+			goto out;
+	}

 	dout("read_partial_connect %p tag %d, con_seq = %u, g_seq = %u\n",
	     con, (int)con->in_reply.tag,
@@ -1755,7 +1758,6 @@ static int read_partial_connect(struct ceph_connection *con)
	     le32_to_cpu(con->in_reply.global_seq));
 out:
 	return ret;
-
 }

 /*
@@ -2039,13 +2041,28 @@ static int process_connect(struct ceph_connection *con)

 	dout("process_connect on %p tag %d\n", con, (int)con->in_tag);

-	if (con->auth_reply_buf) {
+	if (con->auth) {
 		/*
 		 * Any connection that defines ->get_authorizer()
-		 * should also define ->verify_authorizer_reply().
+		 * should also define ->add_authorizer_challenge() and
+		 * ->verify_authorizer_reply().
+		 *
 		 * See get_connect_authorizer().
 		 */
-		ret = con->ops->verify_authorizer_reply(con, 0);
+		if (con->in_reply.tag == CEPH_MSGR_TAG_CHALLENGE_AUTHORIZER) {
+			ret = con->ops->add_authorizer_challenge(
+				    con, con->auth->authorizer_reply_buf,
+				    le32_to_cpu(con->in_reply.authorizer_len));
+			if (ret < 0)
+				return ret;
+
+			con_out_kvec_reset(con);
+			__prepare_write_connect(con);
+			prepare_read_connect(con);
+			return 0;
+		}
+
+		ret = con->ops->verify_authorizer_reply(con);
 		if (ret < 0) {
 			con->error_msg = "bad authorize reply";
 			return ret;
@@ -4478,14 +4478,24 @@ static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con,
 	return auth;
 }

-static int verify_authorizer_reply(struct ceph_connection *con, int len)
+static int add_authorizer_challenge(struct ceph_connection *con,
+				    void *challenge_buf, int challenge_buf_len)
 {
 	struct ceph_osd *o = con->private;
 	struct ceph_osd_client *osdc = o->o_osdc;
 	struct ceph_auth_client *ac = osdc->client->monc.auth;

-	return ceph_auth_verify_authorizer_reply(ac, o->o_auth.authorizer, len);
+	return ceph_auth_add_authorizer_challenge(ac, o->o_auth.authorizer,
+					    challenge_buf, challenge_buf_len);
+}
+
+static int verify_authorizer_reply(struct ceph_connection *con)
+{
+	struct ceph_osd *o = con->private;
+	struct ceph_osd_client *osdc = o->o_osdc;
+	struct ceph_auth_client *ac = osdc->client->monc.auth;
+
+	return ceph_auth_verify_authorizer_reply(ac, o->o_auth.authorizer);
 }

 static int invalidate_authorizer(struct ceph_connection *con)
@@ -4519,6 +4529,7 @@ static const struct ceph_connection_operations osd_con_ops = {
 	.put = put_osd_con,
 	.dispatch = dispatch,
 	.get_authorizer = get_authorizer,
+	.add_authorizer_challenge = add_authorizer_challenge,
 	.verify_authorizer_reply = verify_authorizer_reply,
 	.invalidate_authorizer = invalidate_authorizer,
 	.alloc_msg = alloc_msg,
@@ -261,8 +261,8 @@ static struct net_device *__ip_tunnel_create(struct net *net,
 	} else {
 		if (strlen(ops->kind) > (IFNAMSIZ - 3))
 			goto failed;
-		strlcpy(name, ops->kind, IFNAMSIZ);
-		strncat(name, "%d", 2);
+		strcpy(name, ops->kind);
+		strcat(name, "%d");
 	}

 	ASSERT_RTNL();
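For the __ip_tunnel_create() hunk above: once strlen(ops->kind) has been checked against IFNAMSIZ - 3, a plain strcpy()/strcat() pair builds the "<kind>%d" template without the gcc-8 -Wstringop-truncation warning that the old strlcpy()+strncat() combination triggered. A standalone sketch of the resulting name construction; IFNAMSIZ and the kind string are stand-ins here, not taken from the patch:

#include <stdio.h>
#include <string.h>

#define IFNAMSIZ 16

int main(void)
{
	const char *kind = "gre";	/* stand-in for ops->kind */
	char name[IFNAMSIZ];

	/*
	 * Mirror the patched logic: refuse names that cannot take the
	 * "%d" suffix, then build the template passed on for allocation.
	 */
	if (strlen(kind) > IFNAMSIZ - 3)
		return 1;

	strcpy(name, kind);
	strcat(name, "%d");

	printf("template: %s\n", name);
	return 0;
}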
@@ -389,7 +389,7 @@ int tipc_topsrv_start(struct net *net)
 	topsrv->tipc_conn_new = tipc_subscrb_connect_cb;
 	topsrv->tipc_conn_release = tipc_subscrb_release_cb;

-	strncpy(topsrv->name, name, strlen(name) + 1);
+	strscpy(topsrv->name, name, sizeof(topsrv->name));
 	tn->topsrv = topsrv;
 	atomic_set(&tn->subscription_count, 0);

@@ -10,6 +10,8 @@
 # are not supported by all versions of the compiler
 # ==========================================================================

+KBUILD_CFLAGS += $(call cc-disable-warning, packed-not-aligned)
+
 ifeq ("$(origin W)", "command line")
   export KBUILD_ENABLE_EXTRA_GCC_CHECKS := $(W)
 endif
@@ -25,6 +27,7 @@ warning-1 += -Wold-style-definition
 warning-1 += $(call cc-option, -Wmissing-include-dirs)
 warning-1 += $(call cc-option, -Wunused-but-set-variable)
 warning-1 += $(call cc-option, -Wunused-const-variable)
+warning-1 += $(call cc-option, -Wpacked-not-aligned)
 warning-1 += $(call cc-disable-warning, missing-field-initializers)
 warning-1 += $(call cc-disable-warning, sign-compare)

@@ -395,7 +395,7 @@ usage(void)
  * When we have processed a group that starts off with a known-false
  * #if/#elif sequence (which has therefore been deleted) followed by a
  * #elif that we don't understand and therefore must keep, we edit the
- * latter into a #if to keep the nesting correct. We use strncpy() to
+ * latter into a #if to keep the nesting correct. We use memcpy() to
  * overwrite the 4 byte token "elif" with "if  " without a '\0' byte.
  *
  * When we find a true #elif in a group, the following block will
@@ -450,7 +450,7 @@ static void Idrop (void) { Fdrop(); ignoreon(); }
 static void Itrue (void) { Ftrue();  ignoreon(); }
 static void Ifalse(void) { Ffalse(); ignoreon(); }
 /* modify this line */
-static void Mpass (void) { strncpy(keyword, "if  ", 4); Pelif(); }
+static void Mpass (void) { memcpy(keyword, "if  ", 4); Pelif(); }
 static void Mtrue (void) { keywordedit("else");  state(IS_TRUE_MIDDLE); }
 static void Melif (void) { keywordedit("endif"); state(IS_FALSE_TRAILER); }
 static void Melse (void) { keywordedit("endif"); state(IS_FALSE_ELSE); }
@@ -123,7 +123,7 @@ static int snd_trident_probe(struct pci_dev *pci,
 	} else {
 		strcpy(card->shortname, "Trident ");
 	}
-	strcat(card->shortname, card->driver);
+	strcat(card->shortname, str);
 	sprintf(card->longname, "%s PCI Audio at 0x%lx, irq %d",
		card->shortname, trident->port, trident->irq);
