This is the 4.9.41 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlmHya0ACgkQONu9yGCS
aT7J4BAArYY7/NDbB+rIFtIdqdk3d6PiOyfQjQB2fSrhdR/+38a3Z+AtfNoPkppd
DQsEnXuZyJeUP5GCNAue1urp0nUuuw/Kr/7DUZvfO8wnjFjS/hfKxnRE8o/S+acN
y/TBFO3LuWSp8ROouAnAr7EEDrLtses6/boQHiioGcb/uqI9JVutkdnOTnm5xYJQ
bAo7dEOlK4trRRGbbMju2FHTFHO/NStos3GD2ORcqS3ibj2FyHvGEmbyyBfmmi+2
Jf5cOvn99lszHweUTxnRw93q6RZcGtmhYoPoANzYdtoZiXkV++EaEAfQ6KPVqeno
mLNPqmWWvgJUsd1uCaR89GXwczVJtqThvGuJMroau+ShLJWuI8mL8N/N+g9wThJr
ormq8PTP3oea9u63hfae1hfr4KsPqQ6UTQcSY7OuT4mgchU5T/DuSyYShy3kdtx4
bT1hzpXwD9y0JCgPVfH1o7ad996D9/MVXpELNREcvrqEGsvRbxBx7gPc4FUzMqaz
vft6z0MX8AvgtNLm9EY5a956ixRG27vGYA7drsLPFXKxIuHQ5oh2nBCupxkrDtGK
0FEWeV6p+DJZm4Gl+0pCYt6AqxzszNW0eVM7sKngbAIYQ77m2h50TOb0HIA0MrKu
Nrjs68hKc/louKc1eYQZnt6bD4EVHVjbpgVP+l1PGpKo/OSnxes=
=tb62
-----END PGP SIGNATURE-----

Merge 4.9.41 into android-4.9

Changes in 4.9.41
	af_key: Add lock to key dump
	pstore: Make spinlock per zone instead of global
	net: reduce skb_warn_bad_offload() noise
	jfs: Don't clear SGID when inheriting ACLs
	ALSA: fm801: Initialize chip after IRQ handler is registered
	ALSA: hda - Add missing NVIDIA GPU codec IDs to patch table
	parisc: Prevent TLB speculation on flushed pages on CPUs that only support equivalent aliases
	parisc: Extend disabled preemption in copy_user_page
	parisc: Suspend lockup detectors before system halt
	powerpc/pseries: Fix of_node_put() underflow during reconfig remove
	NFS: invalidate file size when taking a lock.
	NFSv4.1: Fix a race where CB_NOTIFY_LOCK fails to wake a waiter
	crypto: authencesn - Fix digest_null crash
	KVM: PPC: Book3S HV: Enable TM before accessing TM registers
	md/raid5: add thread_group worker async_tx_issue_pending_all
	drm/vmwgfx: Fix gcc-7.1.1 warning
	drm/nouveau/disp/nv50-: bump max chans to 21
	drm/nouveau/bar/gf100: fix access to upper half of BAR2
	KVM: PPC: Book3S HV: Restore critical SPRs to host values on guest exit
	KVM: PPC: Book3S HV: Save/restore host values of debug registers
	Revert "powerpc/numa: Fix percpu allocations to be NUMA aware"
	Staging: comedi: comedi_fops: Avoid orphaned proc entry
	drm: rcar-du: Simplify and fix probe error handling
	smp/hotplug: Move unparking of percpu threads to the control CPU
	smp/hotplug: Replace BUG_ON and react useful
	nfc: Fix hangup of RC-S380* in port100_send_ack()
	nfc: fdp: fix NULL pointer dereference
	net: phy: Do not perform software reset for Generic PHY
	isdn: Fix a sleep-in-atomic bug
	isdn/i4l: fix buffer overflow
	ath10k: fix null deref on wmi-tlv when trying spectral scan
	wil6210: fix deadlock when using fw_no_recovery option
	mailbox: always wait in mbox_send_message for blocking Tx mode
	mailbox: skip complete wait event if timer expired
	mailbox: handle empty message in tx_tick
	sched/cgroup: Move sched_online_group() back into css_online() to fix crash
	RDMA/uverbs: Fix the check for port number
	ipmi/watchdog: fix watchdog timeout set on reboot
	dentry name snapshots
	v4l: s5c73m3: fix negation operator
	pstore: Allow prz to control need for locking
	pstore: Correctly initialize spinlock and flags
	pstore: Use dynamic spinlock initializer
	net: skb_needs_check() accepts CHECKSUM_NONE for tx
	device-dax: fix sysfs duplicate warnings
	x86/mce/AMD: Make the init code more robust
	r8169: add support for RTL8168 series add-on card.
	ARM: omap2+: fixing wrong strcat for Non-NULL terminated string
	dt-bindings: power/supply: Update TPS65217 properties
	dt-bindings: input: Specify the interrupt number of TPS65217 power button
	ARM: dts: am57xx-idk: Put USB2 port in peripheral mode
	ARM: dts: n900: Mark eMMC slot with no-sdio and no-sd flags
	net/mlx5: Disable RoCE on the e-switch management port under switchdev mode
	ipv6: Should use consistent conditional judgement for ip6 fragment between __ip6_append_data and ip6_finish_output
	net/mlx4_core: Use-after-free causes a resource leak in flow-steering detach
	net/mlx4: Remove BUG_ON from ICM allocation routine
	net/mlx4_core: Fix raw qp flow steering rules under SRIOV
	drm/msm: Ensure that the hardware write pointer is valid
	drm/msm: Put back the vaddr in submit_reloc()
	drm/msm: Verify that MSM_SUBMIT_BO_FLAGS are set
	vfio-pci: use 32-bit comparisons for register address for gcc-4.5
	irqchip/keystone: Fix "scheduling while atomic" on rt
	ASoC: tlv320aic3x: Mark the RESET register as volatile
	spi: dw: Make debugfs name unique between instances
	ASoC: nau8825: fix invalid configuration in Pre-Scalar of FLL
	irqchip/mxs: Enable SKIP_SET_WAKE and MASK_ON_SUSPEND
	openrisc: Add _text symbol to fix ksym build error
	dmaengine: ioatdma: Add Skylake PCI Dev ID
	dmaengine: ioatdma: workaround SKX ioatdma version
	l2tp: consider '::' as wildcard address in l2tp_ip6 socket lookup
	dmaengine: ti-dma-crossbar: Add some 'of_node_put()' in error path.
	usb: dwc3: omap: fix race of pm runtime with irq handler in probe
	ARM64: zynqmp: Fix W=1 dtc 1.4 warnings
	ARM64: zynqmp: Fix i2c node's compatible string
	perf probe: Fix to get correct modname from elf header
	ARM: s3c2410_defconfig: Fix invalid values for NF_CT_PROTO_*
	ACPI / scan: Prefer devices without _HID/_CID for _ADR matching
	usb: gadget: Fix copy/pasted error message
	Btrfs: use down_read_nested to make lockdep silent
	Btrfs: fix lockdep warning about log_mutex
	benet: stricter vxlan offloading check in be_features_check
	Btrfs: adjust outstanding_extents counter properly when dio write is split
	Xen: ARM: Zero reserved fields of xatp before making hypervisor call
	tools lib traceevent: Fix prev/next_prio for deadline tasks
	xfrm: Don't use sk_family for socket policy lookups
	perf tools: Install tools/lib/traceevent plugins with install-bin
	perf symbols: Robustify reading of build-id from sysfs
	video: fbdev: cobalt_lcdfb: Handle return NULL error from devm_ioremap
	vfio-pci: Handle error from pci_iomap
	arm64: mm: fix show_pte KERN_CONT fallout
	nvmem: imx-ocotp: Fix wrong register size
	net: usb: asix_devices: add .reset_resume for USB PM
	ASoC: fsl_ssi: set fifo watermark to more reliable value
	sh_eth: enable RX descriptor word 0 shift on SH7734
	ARCv2: IRQ: Call entry/exit functions for chained handlers in MCIP
	ALSA: usb-audio: test EP_FLAG_RUNNING at urb completion
	x86/platform/intel-mid: Rename 'spidev' to 'mrfld_spidev'
	perf/x86: Set pmu->module in Intel PMU modules
	ASoC: Intel: bytcr-rt5640: fix settings in internal clock mode
	HID: ignore Petzl USB headlamp
	scsi: fnic: Avoid sending reset to firmware when another reset is in progress
	scsi: snic: Return error code on memory allocation failure
	scsi: bfa: Increase requested firmware version to 3.2.5.1
	ASoC: Intel: Skylake: Release FW ctx in cleanup
	ASoC: dpcm: Avoid putting stream state to STOP when FE stream is paused
	Linux 4.9.41

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
This commit is contained in:
commit e6b0c64f6f

119 changed files with 747 additions and 365 deletions
@@ -8,8 +8,9 @@ This driver provides a simple power button event via an Interrupt.
 Required properties:
 - compatible: should be "ti,tps65217-pwrbutton" or "ti,tps65218-pwrbutton"
 
-Required properties for TPS65218:
+Required properties:
 - interrupts: should be one of the following
+	- <2>: For controllers compatible with tps65217
 	- <3 IRQ_TYPE_EDGE_BOTH>: For controllers compatible with tps65218
 
 Examples:

@@ -17,6 +18,7 @@ Examples:
 &tps {
 	tps65217-pwrbutton {
 		compatible = "ti,tps65217-pwrbutton";
+		interrupts = <2>;
 	};
 };
@@ -2,11 +2,16 @@ TPS65217 Charger
 
 Required Properties:
 -compatible: "ti,tps65217-charger"
+-interrupts: TPS65217 interrupt numbers for the AC and USB charger input change.
+             Should be <0> for the USB charger and <1> for the AC adapter.
+-interrupt-names: Should be "USB" and "AC"
 
 This node is a subnode of the tps65217 PMIC.
 
 Example:
 
 	tps65217-charger {
-		compatible = "ti,tps65090-charger";
+		compatible = "ti,tps65217-charger";
+		interrupts = <0>, <1>;
+		interrupt-names = "USB", "AC";
 	};
Makefile

@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 40
+SUBLEVEL = 41
 EXTRAVERSION =
 NAME = Roaring Lionus
@@ -10,6 +10,7 @@
 
 #include <linux/smp.h>
 #include <linux/irq.h>
+#include <linux/irqchip/chained_irq.h>
 #include <linux/spinlock.h>
 #include <asm/irqflags-arcv2.h>
 #include <asm/mcip.h>

@@ -221,10 +222,13 @@ static irq_hw_number_t idu_first_hwirq;
 static void idu_cascade_isr(struct irq_desc *desc)
 {
 	struct irq_domain *idu_domain = irq_desc_get_handler_data(desc);
+	struct irq_chip *core_chip = irq_desc_get_chip(desc);
 	irq_hw_number_t core_hwirq = irqd_to_hwirq(irq_desc_get_irq_data(desc));
 	irq_hw_number_t idu_hwirq = core_hwirq - idu_first_hwirq;
 
+	chained_irq_enter(core_chip, desc);
 	generic_handle_irq(irq_find_mapping(idu_domain, idu_hwirq));
+	chained_irq_exit(core_chip, desc);
 }
 
 static int idu_irq_map(struct irq_domain *d, unsigned int virq, irq_hw_number_t hwirq)
@@ -294,7 +294,7 @@
 };
 
 &usb2 {
-	dr_mode = "otg";
+	dr_mode = "peripheral";
 };
 
 &mmc2 {
@@ -734,6 +734,8 @@
 	vmmc_aux-supply = <&vsim>;
 	bus-width = <8>;
 	non-removable;
+	no-sdio;
+	no-sd;
 };
 
 &mmc3 {
@@ -86,9 +86,9 @@ CONFIG_IPV6_TUNNEL=m
 CONFIG_NETFILTER=y
 CONFIG_NF_CONNTRACK=m
 CONFIG_NF_CONNTRACK_EVENTS=y
-CONFIG_NF_CT_PROTO_DCCP=m
-CONFIG_NF_CT_PROTO_SCTP=m
-CONFIG_NF_CT_PROTO_UDPLITE=m
+CONFIG_NF_CT_PROTO_DCCP=y
+CONFIG_NF_CT_PROTO_SCTP=y
+CONFIG_NF_CT_PROTO_UDPLITE=y
 CONFIG_NF_CONNTRACK_AMANDA=m
 CONFIG_NF_CONNTRACK_FTP=m
 CONFIG_NF_CONNTRACK_H323=m
@@ -790,14 +790,14 @@ static int _init_main_clk(struct omap_hwmod *oh)
 	int ret = 0;
 	char name[MOD_CLK_MAX_NAME_LEN];
 	struct clk *clk;
+	static const char modck[] = "_mod_ck";
 
-	/* +7 magic comes from '_mod_ck' suffix */
-	if (strlen(oh->name) + 7 > MOD_CLK_MAX_NAME_LEN)
+	if (strlen(oh->name) >= MOD_CLK_MAX_NAME_LEN - strlen(modck))
 		pr_warn("%s: warning: cropping name for %s\n", __func__,
 			oh->name);
 
-	strncpy(name, oh->name, MOD_CLK_MAX_NAME_LEN - 7);
-	strcat(name, "_mod_ck");
+	strlcpy(name, oh->name, MOD_CLK_MAX_NAME_LEN - strlen(modck));
+	strlcat(name, modck, MOD_CLK_MAX_NAME_LEN);
 
 	clk = clk_get(NULL, name);
 	if (!IS_ERR(clk)) {
@@ -27,7 +27,7 @@
 		stdout-path = "serial0:115200n8";
 	};
 
-	memory {
+	memory@0 {
 		device_type = "memory";
 		reg = <0x0 0x0 0x0 0x40000000>;
 	};

@@ -72,7 +72,7 @@
 			     <1 10 0xf08>;
 	};
 
-	amba_apu {
+	amba_apu: amba_apu@0 {
 		compatible = "simple-bus";
 		#address-cells = <2>;
 		#size-cells = <1>;

@@ -175,7 +175,7 @@
 		};
 
 		i2c0: i2c@ff020000 {
-			compatible = "cdns,i2c-r1p10";
+			compatible = "cdns,i2c-r1p14", "cdns,i2c-r1p10";
 			status = "disabled";
 			interrupt-parent = <&gic>;
 			interrupts = <0 17 4>;

@@ -185,7 +185,7 @@
 		};
 
 		i2c1: i2c@ff030000 {
-			compatible = "cdns,i2c-r1p10";
+			compatible = "cdns,i2c-r1p14", "cdns,i2c-r1p10";
 			status = "disabled";
 			interrupt-parent = <&gic>;
 			interrupts = <0 18 4>;
@@ -101,21 +101,21 @@ void show_pte(struct mm_struct *mm, unsigned long addr)
 			break;
 
 		pud = pud_offset(pgd, addr);
-		printk(", *pud=%016llx", pud_val(*pud));
+		pr_cont(", *pud=%016llx", pud_val(*pud));
 		if (pud_none(*pud) || pud_bad(*pud))
 			break;
 
 		pmd = pmd_offset(pud, addr);
-		printk(", *pmd=%016llx", pmd_val(*pmd));
+		pr_cont(", *pmd=%016llx", pmd_val(*pmd));
 		if (pmd_none(*pmd) || pmd_bad(*pmd))
 			break;
 
 		pte = pte_offset_map(pmd, addr);
-		printk(", *pte=%016llx", pte_val(*pte));
+		pr_cont(", *pte=%016llx", pte_val(*pte));
 		pte_unmap(pte);
 	} while(0);
 
-	printk("\n");
+	pr_cont("\n");
 }
 
 #ifdef CONFIG_ARM64_HW_AFDBM
@@ -38,6 +38,8 @@ SECTIONS
         /* Read-only sections, merged into text segment: */
         . = LOAD_BASE ;
 
+	_text = .;
+
 	/* _s_kernel_ro must be page aligned */
 	. = ALIGN(PAGE_SIZE);
 	_s_kernel_ro = .;
@@ -452,8 +452,8 @@ void copy_user_page(void *vto, void *vfrom, unsigned long vaddr,
 	   before it can be accessed through the kernel mapping. */
 	preempt_disable();
 	flush_dcache_page_asm(__pa(vfrom), vaddr);
-	preempt_enable();
 	copy_page_asm(vto, vfrom);
+	preempt_enable();
 }
 EXPORT_SYMBOL(copy_user_page);
 

@@ -538,6 +538,10 @@ void flush_cache_mm(struct mm_struct *mm)
 	struct vm_area_struct *vma;
 	pgd_t *pgd;
 
+	/* Flush the TLB to avoid speculation if coherency is required. */
+	if (parisc_requires_coherency())
+		flush_tlb_all();
+
 	/* Flushing the whole cache on each cpu takes forever on
 	   rp3440, etc. So, avoid it if the mm isn't too big. */
 	if (mm_total_size(mm) >= parisc_cache_flush_threshold) {

@@ -594,33 +598,22 @@ flush_user_icache_range(unsigned long start, unsigned long end)
 void flush_cache_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end)
 {
-	unsigned long addr;
-	pgd_t *pgd;
-
 	BUG_ON(!vma->vm_mm->context);
 
+	/* Flush the TLB to avoid speculation if coherency is required. */
+	if (parisc_requires_coherency())
+		flush_tlb_range(vma, start, end);
+
 	if ((end - start) >= parisc_cache_flush_threshold) {
 		flush_cache_all();
 		return;
 	}
 
-	if (vma->vm_mm->context == mfsp(3)) {
-		flush_user_dcache_range_asm(start, end);
-		if (vma->vm_flags & VM_EXEC)
-			flush_user_icache_range_asm(start, end);
-		return;
-	}
+	BUG_ON(vma->vm_mm->context != mfsp(3));
 
-	pgd = vma->vm_mm->pgd;
-	for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE) {
-		unsigned long pfn;
-		pte_t *ptep = get_ptep(pgd, addr);
-		if (!ptep)
-			continue;
-		pfn = pte_pfn(*ptep);
-		if (pfn_valid(pfn))
-			__flush_cache_page(vma, addr, PFN_PHYS(pfn));
-	}
+	flush_user_dcache_range_asm(start, end);
+	if (vma->vm_flags & VM_EXEC)
+		flush_user_icache_range_asm(start, end);
 }
 
 void

@@ -629,7 +622,8 @@ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long
 	BUG_ON(!vma->vm_mm->context);
 
 	if (pfn_valid(pfn)) {
-		flush_tlb_page(vma, vmaddr);
+		if (parisc_requires_coherency())
+			flush_tlb_page(vma, vmaddr);
 		__flush_cache_page(vma, vmaddr, PFN_PHYS(pfn));
 	}
 }
@@ -50,6 +50,7 @@
 #include <linux/uaccess.h>
 #include <linux/rcupdate.h>
 #include <linux/random.h>
+#include <linux/nmi.h>
 
 #include <asm/io.h>
 #include <asm/asm-offsets.h>

@@ -142,6 +143,7 @@ void machine_power_off(void)
 
 	/* prevent soft lockup/stalled CPU messages for endless loop. */
 	rcu_sysrq_start();
+	lockup_detector_suspend();
 	for (;;);
 }
 
@@ -44,22 +44,8 @@ extern void __init dump_numa_cpu_topology(void);
 extern int sysfs_add_device_to_node(struct device *dev, int nid);
 extern void sysfs_remove_device_from_node(struct device *dev, int nid);
 
-static inline int early_cpu_to_node(int cpu)
-{
-	int nid;
-
-	nid = numa_cpu_lookup_table[cpu];
-
-	/*
-	 * Fall back to node 0 if nid is unset (it should be, except bugs).
-	 * This allows callers to safely do NODE_DATA(early_cpu_to_node(cpu)).
-	 */
-	return (nid < 0) ? 0 : nid;
-}
 #else
 
-static inline int early_cpu_to_node(int cpu) { return 0; }
-
 static inline void dump_numa_cpu_topology(void) {}
 
 static inline int sysfs_add_device_to_node(struct device *dev, int nid)
@@ -595,7 +595,7 @@ void __init emergency_stack_init(void)
 
 static void * __init pcpu_fc_alloc(unsigned int cpu, size_t size, size_t align)
 {
-	return __alloc_bootmem_node(NODE_DATA(early_cpu_to_node(cpu)), size, align,
+	return __alloc_bootmem_node(NODE_DATA(cpu_to_node(cpu)), size, align,
 				    __pa(MAX_DMA_ADDRESS));
 }
 

@@ -606,7 +606,7 @@ static void __init pcpu_fc_free(void *ptr, size_t size)
 
 static int pcpu_cpu_distance(unsigned int from, unsigned int to)
 {
-	if (early_cpu_to_node(from) == early_cpu_to_node(to))
+	if (cpu_to_node(from) == cpu_to_node(to))
 		return LOCAL_DISTANCE;
 	else
 		return REMOTE_DISTANCE;
@@ -2808,6 +2808,8 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	int r;
 	int srcu_idx;
 	unsigned long ebb_regs[3] = {};	/* shut up GCC */
+	unsigned long user_tar = 0;
+	unsigned int user_vrsave;
 
 	if (!vcpu->arch.sane) {
 		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;

@@ -2828,6 +2830,8 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		run->fail_entry.hardware_entry_failure_reason = 0;
 		return -EINVAL;
 	}
+	/* Enable TM so we can read the TM SPRs */
+	mtmsr(mfmsr() | MSR_TM);
 	current->thread.tm_tfhar = mfspr(SPRN_TFHAR);
 	current->thread.tm_tfiar = mfspr(SPRN_TFIAR);
 	current->thread.tm_texasr = mfspr(SPRN_TEXASR);

@@ -2856,12 +2860,14 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
 
 	flush_all_to_thread(current);
 
-	/* Save userspace EBB register values */
+	/* Save userspace EBB and other register values */
 	if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
 		ebb_regs[0] = mfspr(SPRN_EBBHR);
 		ebb_regs[1] = mfspr(SPRN_EBBRR);
 		ebb_regs[2] = mfspr(SPRN_BESCR);
+		user_tar = mfspr(SPRN_TAR);
 	}
+	user_vrsave = mfspr(SPRN_VRSAVE);
 
 	vcpu->arch.wqp = &vcpu->arch.vcore->wq;
 	vcpu->arch.pgdir = current->mm->pgd;

@@ -2885,12 +2891,15 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		r = kvmppc_xics_rm_complete(vcpu, 0);
 	} while (is_kvmppc_resume_guest(r));
 
-	/* Restore userspace EBB register values */
+	/* Restore userspace EBB and other register values */
 	if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
 		mtspr(SPRN_EBBHR, ebb_regs[0]);
 		mtspr(SPRN_EBBRR, ebb_regs[1]);
 		mtspr(SPRN_BESCR, ebb_regs[2]);
+		mtspr(SPRN_TAR, user_tar);
+		mtspr(SPRN_FSCR, current->thread.fscr);
 	}
+	mtspr(SPRN_VRSAVE, user_vrsave);
 
  out:
 	vcpu->arch.state = KVMPPC_VCPU_NOTREADY;
@@ -37,6 +37,13 @@
 #define NAPPING_CEDE	1
 #define NAPPING_NOVCPU	2
 
+/* Stack frame offsets for kvmppc_hv_entry */
+#define SFS			112
+#define STACK_SLOT_TRAP		(SFS-4)
+#define STACK_SLOT_CIABR	(SFS-16)
+#define STACK_SLOT_DAWR		(SFS-24)
+#define STACK_SLOT_DAWRX	(SFS-32)
+
 /*
  * Call kvmppc_hv_entry in real mode.
  * Must be called with interrupts hard-disabled.

@@ -289,10 +296,10 @@ kvm_novcpu_exit:
 	bl	kvmhv_accumulate_time
 #endif
 13:	mr	r3, r12
-	stw	r12, 112-4(r1)
+	stw	r12, STACK_SLOT_TRAP(r1)
 	bl	kvmhv_commence_exit
 	nop
-	lwz	r12, 112-4(r1)
+	lwz	r12, STACK_SLOT_TRAP(r1)
 	b	kvmhv_switch_to_host
 
 /*

@@ -537,7 +544,7 @@ kvmppc_hv_entry:
 	 */
 	mflr	r0
 	std	r0, PPC_LR_STKOFF(r1)
-	stdu	r1, -112(r1)
+	stdu	r1, -SFS(r1)
 
 	/* Save R1 in the PACA */
 	std	r1, HSTATE_HOST_R1(r13)

@@ -698,6 +705,16 @@ kvmppc_got_guest:
 	mtspr	SPRN_PURR,r7
 	mtspr	SPRN_SPURR,r8
 
+	/* Save host values of some registers */
+BEGIN_FTR_SECTION
+	mfspr	r5, SPRN_CIABR
+	mfspr	r6, SPRN_DAWR
+	mfspr	r7, SPRN_DAWRX
+	std	r5, STACK_SLOT_CIABR(r1)
+	std	r6, STACK_SLOT_DAWR(r1)
+	std	r7, STACK_SLOT_DAWRX(r1)
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
+
 BEGIN_FTR_SECTION
 	/* Set partition DABR */
 	/* Do this before re-enabling PMU to avoid P7 DABR corruption bug */

@@ -1361,8 +1378,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
 	 */
 	li	r0, 0
 	mtspr	SPRN_IAMR, r0
-	mtspr	SPRN_CIABR, r0
-	mtspr	SPRN_DAWRX, r0
 	mtspr	SPRN_PSPB, r0
 	mtspr	SPRN_TCSCR, r0
 	mtspr	SPRN_WORT, r0
 	/* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */

@@ -1378,6 +1394,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
 	std	r6,VCPU_UAMOR(r9)
 	li	r6,0
 	mtspr	SPRN_AMR,r6
+	mtspr	SPRN_UAMOR, r6
 
 	/* Switch DSCR back to host value */
 	mfspr	r8, SPRN_DSCR

@@ -1519,6 +1536,16 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 	slbia
 	ptesync
 
+	/* Restore host values of some registers */
+BEGIN_FTR_SECTION
+	ld	r5, STACK_SLOT_CIABR(r1)
+	ld	r6, STACK_SLOT_DAWR(r1)
+	ld	r7, STACK_SLOT_DAWRX(r1)
+	mtspr	SPRN_CIABR, r5
+	mtspr	SPRN_DAWR, r6
+	mtspr	SPRN_DAWRX, r7
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
+
 	/*
 	 * POWER7/POWER8 guest -> host partition switch code.
 	 * We don't have to lock against tlbies but we do

@@ -1652,8 +1679,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 	li	r0, KVM_GUEST_MODE_NONE
 	stb	r0, HSTATE_IN_GUEST(r13)
 
-	ld	r0, 112+PPC_LR_STKOFF(r1)
-	addi	r1, r1, 112
+	ld	r0, SFS+PPC_LR_STKOFF(r1)
+	addi	r1, r1, SFS
 	mtlr	r0
 	blr
@@ -82,7 +82,6 @@ static int pSeries_reconfig_remove_node(struct device_node *np)
 
 	of_detach_node(np);
-	of_node_put(parent);
 	of_node_put(np); /* Must decrement the refcount */
 	return 0;
 }
@@ -434,6 +434,7 @@ static struct pmu cstate_core_pmu = {
 	.stop		= cstate_pmu_event_stop,
 	.read		= cstate_pmu_event_update,
 	.capabilities	= PERF_PMU_CAP_NO_INTERRUPT,
+	.module		= THIS_MODULE,
 };
 
 static struct pmu cstate_pkg_pmu = {

@@ -447,6 +448,7 @@ static struct pmu cstate_pkg_pmu = {
 	.stop		= cstate_pmu_event_stop,
 	.read		= cstate_pmu_event_update,
 	.capabilities	= PERF_PMU_CAP_NO_INTERRUPT,
+	.module		= THIS_MODULE,
 };
 
 static const struct cstate_model nhm_cstates __initconst = {
@@ -697,6 +697,7 @@ static int __init init_rapl_pmus(void)
 	rapl_pmus->pmu.start	= rapl_pmu_event_start;
 	rapl_pmus->pmu.stop	= rapl_pmu_event_stop;
 	rapl_pmus->pmu.read	= rapl_pmu_event_read;
+	rapl_pmus->pmu.module	= THIS_MODULE;
 	return 0;
 }
 
@@ -733,6 +733,7 @@ static int uncore_pmu_register(struct intel_uncore_pmu *pmu)
 			.start		= uncore_pmu_event_start,
 			.stop		= uncore_pmu_event_stop,
 			.read		= uncore_pmu_event_read,
+			.module		= THIS_MODULE,
 		};
 	} else {
 		pmu->pmu = *pmu->type->pmu;
@@ -955,6 +955,9 @@ static int threshold_create_bank(unsigned int cpu, unsigned int bank)
 	const char *name = get_name(bank, NULL);
 	int err = 0;
 
+	if (!dev)
+		return -ENODEV;
+
 	if (is_shared_bank(bank)) {
 		nb = node_to_amd_nb(amd_get_nb_id(cpu));
 
@@ -15,7 +15,7 @@ obj-$(subst m,y,$(CONFIG_INTEL_MID_POWER_BUTTON)) += platform_msic_power_btn.o
 obj-$(subst m,y,$(CONFIG_GPIO_INTEL_PMIC)) += platform_pmic_gpio.o
 obj-$(subst m,y,$(CONFIG_INTEL_MFLD_THERMAL)) += platform_msic_thermal.o
 # SPI Devices
-obj-$(subst m,y,$(CONFIG_SPI_SPIDEV)) += platform_spidev.o
+obj-$(subst m,y,$(CONFIG_SPI_SPIDEV)) += platform_mrfld_spidev.o
 # I2C Devices
 obj-$(subst m,y,$(CONFIG_SENSORS_EMC1403)) += platform_emc1403.o
 obj-$(subst m,y,$(CONFIG_SENSORS_LIS3LV02D)) += platform_lis331.o
@@ -11,6 +11,7 @@
  * of the License.
  */
 
+#include <linux/err.h>
 #include <linux/init.h>
 #include <linux/sfi.h>
 #include <linux/spi/pxa2xx_spi.h>

@@ -34,6 +35,9 @@ static void __init *spidev_platform_data(void *info)
 {
 	struct spi_board_info *spi_info = info;
 
+	if (intel_mid_identify_cpu() != INTEL_MID_CPU_CHIP_TANGIER)
+		return ERR_PTR(-ENODEV);
+
 	spi_info->mode = SPI_MODE_0;
 	spi_info->controller_data = &spidev_spi_chip;
@@ -248,6 +248,9 @@ static int crypto_authenc_esn_decrypt_tail(struct aead_request *req,
 	u8 *ihash = ohash + crypto_ahash_digestsize(auth);
 	u32 tmp[2];
 
+	if (!authsize)
+		goto decrypt;
+
 	/* Move high-order bits of sequence number back. */
 	scatterwalk_map_and_copy(tmp, dst, 4, 4, 0);
 	scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 0);

@@ -256,6 +259,8 @@ static int crypto_authenc_esn_decrypt_tail(struct aead_request *req,
 	if (crypto_memneq(ihash, ohash, authsize))
 		return -EBADMSG;
 
+decrypt:
+
 	sg_init_table(areq_ctx->dst, 2);
 	dst = scatterwalk_ffwd(areq_ctx->dst, dst, assoclen);
 
|
@ -98,7 +98,15 @@ static int find_child_checks(struct acpi_device *adev, bool check_children)
|
|||
if (check_children && list_empty(&adev->children))
|
||||
return -ENODEV;
|
||||
|
||||
return sta_present ? FIND_CHILD_MAX_SCORE : FIND_CHILD_MIN_SCORE;
|
||||
/*
|
||||
* If the device has a _HID (or _CID) returning a valid ACPI/PNP
|
||||
* device ID, it is better to make it look less attractive here, so that
|
||||
* the other device with the same _ADR value (that may not have a valid
|
||||
* device ID) can be matched going forward. [This means a second spec
|
||||
* violation in a row, so whatever we do here is best effort anyway.]
|
||||
*/
|
||||
return sta_present && list_empty(&adev->pnp.ids) ?
|
||||
FIND_CHILD_MAX_SCORE : FIND_CHILD_MIN_SCORE;
|
||||
}
|
||||
|
||||
struct acpi_device *acpi_find_child_device(struct acpi_device *parent,
|
||||
|
|
|
@@ -1162,10 +1162,11 @@ static int wdog_reboot_handler(struct notifier_block *this,
 			ipmi_watchdog_state = WDOG_TIMEOUT_NONE;
 			ipmi_set_timeout(IPMI_SET_TIMEOUT_NO_HB);
 		} else if (ipmi_watchdog_state != WDOG_TIMEOUT_NONE) {
-			/* Set a long timer to let the reboot happens, but
-			   reboot if it hangs, but only if the watchdog
+			/* Set a long timer to let the reboot happen or
+			   reset if it hangs, but only if the watchdog
 			   timer was already running. */
-			timeout = 120;
+			if (timeout < 120)
+				timeout = 120;
 			pretimeout = 0;
 			ipmi_watchdog_state = WDOG_TIMEOUT_RESET;
 			ipmi_set_timeout(IPMI_SET_TIMEOUT_NO_HB);
@@ -546,7 +546,8 @@ static void dax_dev_release(struct device *dev)
 	struct dax_dev *dax_dev = to_dax_dev(dev);
 	struct dax_region *dax_region = dax_dev->region;
 
-	ida_simple_remove(&dax_region->ida, dax_dev->id);
+	if (dax_dev->id >= 0)
+		ida_simple_remove(&dax_region->ida, dax_dev->id);
 	ida_simple_remove(&dax_minor_ida, MINOR(dev->devt));
 	dax_region_put(dax_region);
 	iput(dax_dev->inode);

@@ -581,7 +582,7 @@ static void unregister_dax_dev(void *dev)
 }
 
 struct dax_dev *devm_create_dax_dev(struct dax_region *dax_region,
-		struct resource *res, int count)
+		int id, struct resource *res, int count)
 {
 	struct device *parent = dax_region->dev;
 	struct dax_dev *dax_dev;

@@ -608,10 +609,16 @@ struct dax_dev *devm_create_dax_dev(struct dax_region *dax_region,
 	if (i < count)
 		goto err_id;
 
-	dax_dev->id = ida_simple_get(&dax_region->ida, 0, 0, GFP_KERNEL);
-	if (dax_dev->id < 0) {
-		rc = dax_dev->id;
-		goto err_id;
+	if (id < 0) {
+		id = ida_simple_get(&dax_region->ida, 0, 0, GFP_KERNEL);
+		dax_dev->id = id;
+		if (id < 0) {
+			rc = id;
+			goto err_id;
+		}
+	} else {
+		/* region provider owns @id lifetime */
+		dax_dev->id = -1;
 	}
 
 	minor = ida_simple_get(&dax_minor_ida, 0, 0, GFP_KERNEL);

@@ -650,7 +657,7 @@ struct dax_dev *devm_create_dax_dev(struct dax_region *dax_region,
 	dev->parent = parent;
 	dev->groups = dax_attribute_groups;
 	dev->release = dax_dev_release;
-	dev_set_name(dev, "dax%d.%d", dax_region->id, dax_dev->id);
+	dev_set_name(dev, "dax%d.%d", dax_region->id, id);
 	rc = device_add(dev);
 	if (rc) {
 		kill_dax_dev(dax_dev);

@@ -669,7 +676,8 @@ struct dax_dev *devm_create_dax_dev(struct dax_region *dax_region,
  err_inode:
 	ida_simple_remove(&dax_minor_ida, minor);
  err_minor:
-	ida_simple_remove(&dax_region->ida, dax_dev->id);
+	if (dax_dev->id >= 0)
+		ida_simple_remove(&dax_region->ida, dax_dev->id);
  err_id:
 	kfree(dax_dev);
 
@@ -21,5 +21,5 @@ struct dax_region *alloc_dax_region(struct device *parent,
 		int region_id, struct resource *res, unsigned int align,
 		void *addr, unsigned long flags);
 struct dax_dev *devm_create_dax_dev(struct dax_region *dax_region,
-		struct resource *res, int count);
+		int id, struct resource *res, int count);
 #endif /* __DAX_H__ */
@@ -58,13 +58,12 @@ static void dax_pmem_percpu_kill(void *data)
 
 static int dax_pmem_probe(struct device *dev)
 {
-	int rc;
 	void *addr;
 	struct resource res;
 	struct dax_dev *dax_dev;
+	int rc, id, region_id;
 	struct nd_pfn_sb *pfn_sb;
 	struct dax_pmem *dax_pmem;
-	struct nd_region *nd_region;
 	struct nd_namespace_io *nsio;
 	struct dax_region *dax_region;
 	struct nd_namespace_common *ndns;

@@ -122,14 +121,17 @@ static int dax_pmem_probe(struct device *dev)
 	/* adjust the dax_region resource to the start of data */
 	res.start += le64_to_cpu(pfn_sb->dataoff);
 
-	nd_region = to_nd_region(dev->parent);
-	dax_region = alloc_dax_region(dev, nd_region->id, &res,
+	rc = sscanf(dev_name(&ndns->dev), "namespace%d.%d", &region_id, &id);
+	if (rc != 2)
+		return -EINVAL;
+
+	dax_region = alloc_dax_region(dev, region_id, &res,
 			le32_to_cpu(pfn_sb->align), addr, PFN_DEV|PFN_MAP);
 	if (!dax_region)
 		return -ENOMEM;
 
 	/* TODO: support for subdividing a dax region... */
-	dax_dev = devm_create_dax_dev(dax_region, &res, 1);
+	dax_dev = devm_create_dax_dev(dax_region, id, &res, 1);
 
 	/* child dax_dev instances now own the lifetime of the dax_region */
 	dax_region_put(dax_region);
@@ -64,6 +64,8 @@
 #define PCI_DEVICE_ID_INTEL_IOAT_BDX8	0x6f2e
 #define PCI_DEVICE_ID_INTEL_IOAT_BDX9	0x6f2f
 
+#define PCI_DEVICE_ID_INTEL_IOAT_SKX	0x2021
+
 #define IOAT_VER_1_2            0x12    /* Version 1.2 */
 #define IOAT_VER_2_0            0x20    /* Version 2.0 */
 #define IOAT_VER_3_0            0x30    /* Version 3.0 */

@@ -106,6 +106,8 @@ static struct pci_device_id ioat_pci_tbl[] = {
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDX8) },
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDX9) },
 
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SKX) },
+
 	/* I/OAT v3.3 platforms */
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD0) },
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD1) },

@@ -243,10 +245,15 @@ static bool is_bdx_ioat(struct pci_dev *pdev)
 	}
 }
 
+static inline bool is_skx_ioat(struct pci_dev *pdev)
+{
+	return (pdev->device == PCI_DEVICE_ID_INTEL_IOAT_SKX) ? true : false;
+}
+
 static bool is_xeon_cb32(struct pci_dev *pdev)
 {
 	return is_jf_ioat(pdev) || is_snb_ioat(pdev) || is_ivb_ioat(pdev) ||
-		is_hsw_ioat(pdev) || is_bdx_ioat(pdev);
+		is_hsw_ioat(pdev) || is_bdx_ioat(pdev) || is_skx_ioat(pdev);
 }
 
 bool is_bwd_ioat(struct pci_dev *pdev)

@@ -1350,6 +1357,8 @@ static int ioat_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	device->version = readb(device->reg_base + IOAT_VER_OFFSET);
 	if (device->version >= IOAT_VER_3_0) {
+		if (is_skx_ioat(pdev))
+			device->version = IOAT_VER_3_2;
 		err = ioat3_dma_probe(device, ioat_dca_enabled);
 
 		if (device->version >= IOAT_VER_3_3)
@@ -149,6 +149,7 @@ static int ti_am335x_xbar_probe(struct platform_device *pdev)
 	match = of_match_node(ti_am335x_master_match, dma_node);
 	if (!match) {
 		dev_err(&pdev->dev, "DMA master is not supported\n");
+		of_node_put(dma_node);
 		return -EINVAL;
 	}

@@ -339,6 +340,7 @@ static int ti_dra7_xbar_probe(struct platform_device *pdev)
 	match = of_match_node(ti_dra7_master_match, dma_node);
 	if (!match) {
 		dev_err(&pdev->dev, "DMA master is not supported\n");
+		of_node_put(dma_node);
 		return -EINVAL;
 	}
@@ -210,7 +210,14 @@ void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
 void adreno_flush(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
-	uint32_t wptr = get_wptr(gpu->rb);
+	uint32_t wptr;
+
+	/*
+	 * Mask wptr value that we calculate to fit in the HW range. This is
+	 * to account for the possibility that the last command fit exactly into
+	 * the ringbuffer and rb->next hasn't wrapped to zero yet
+	 */
+	wptr = get_wptr(gpu->rb) & ((gpu->rb->size / 4) - 1);

 	/* ensure writes to ringbuffer have hit system memory: */
 	mb();
@@ -106,7 +106,8 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
 			pagefault_disable();
 		}

-		if (submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) {
+		if ((submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) ||
+				!(submit_bo.flags & MSM_SUBMIT_BO_FLAGS)) {
 			DRM_ERROR("invalid flags: %x\n", submit_bo.flags);
 			ret = -EINVAL;
 			goto out_unlock;

@@ -290,7 +291,7 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob
 {
 	uint32_t i, last_offset = 0;
 	uint32_t *ptr;
-	int ret;
+	int ret = 0;

 	if (offset % 4) {
 		DRM_ERROR("non-aligned cmdstream buffer: %u\n", offset);

@@ -317,12 +318,13 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob
 		ret = copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc));
 		if (ret)
-			return -EFAULT;
+			goto out;

 		if (submit_reloc.submit_offset % 4) {
 			DRM_ERROR("non-aligned reloc offset: %u\n",
 					submit_reloc.submit_offset);
-			return -EINVAL;
+			ret = -EINVAL;
+			goto out;
 		}

 		/* offset in dwords: */

@@ -331,12 +333,13 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob
 		if ((off >= (obj->base.size / 4)) ||
 				(off < last_offset)) {
 			DRM_ERROR("invalid offset %u at reloc %u\n", off, i);
-			return -EINVAL;
+			ret = -EINVAL;
+			goto out;
 		}

 		ret = submit_bo(submit, submit_reloc.reloc_idx, NULL, &iova, &valid);
 		if (ret)
-			return ret;
+			goto out;

 		if (valid)
 			continue;

@@ -353,9 +356,10 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob
 		last_offset = off;
 	}

+out:
 	msm_gem_put_vaddr_locked(&obj->base);

-	return 0;
+	return ret;
 }

 static void submit_cleanup(struct msm_gem_submit *submit)
@@ -23,7 +23,8 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int size)
 	struct msm_ringbuffer *ring;
 	int ret;

-	size = ALIGN(size, 4);   /* size should be dword aligned */
+	if (WARN_ON(!is_power_of_2(size)))
+		return ERR_PTR(-EINVAL);

 	ring = kzalloc(sizeof(*ring), GFP_KERNEL);
 	if (!ring) {
@@ -27,7 +27,7 @@ struct nv50_disp {
 		u8 type[3];
 	} pior;

-	struct nv50_disp_chan *chan[17];
+	struct nv50_disp_chan *chan[21];
 };

 int nv50_disp_root_scanoutpos(NV50_DISP_MTHD_V0);
@@ -129,7 +129,7 @@ gf100_bar_init(struct nvkm_bar *base)
 	if (bar->bar[0].mem) {
 		addr = nvkm_memory_addr(bar->bar[0].mem) >> 12;
-		nvkm_wr32(device, 0x001714, 0xc0000000 | addr);
+		nvkm_wr32(device, 0x001714, 0x80000000 | addr);
 	}

 	return 0;
@@ -285,7 +285,6 @@ static int rcar_du_remove(struct platform_device *pdev)
 	drm_kms_helper_poll_fini(ddev);
 	drm_mode_config_cleanup(ddev);
-	drm_vblank_cleanup(ddev);

 	drm_dev_unref(ddev);

@@ -305,7 +304,7 @@ static int rcar_du_probe(struct platform_device *pdev)
 		return -ENODEV;
 	}

-	/* Allocate and initialize the DRM and R-Car device structures. */
+	/* Allocate and initialize the R-Car device structure. */
 	rcdu = devm_kzalloc(&pdev->dev, sizeof(*rcdu), GFP_KERNEL);
 	if (rcdu == NULL)
 		return -ENOMEM;

@@ -315,6 +314,15 @@ static int rcar_du_probe(struct platform_device *pdev)
 	rcdu->dev = &pdev->dev;
 	rcdu->info = of_match_device(rcar_du_of_table, rcdu->dev)->data;

+	platform_set_drvdata(pdev, rcdu);
+
+	/* I/O resources */
+	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	rcdu->mmio = devm_ioremap_resource(&pdev->dev, mem);
+	if (IS_ERR(rcdu->mmio))
+		return PTR_ERR(rcdu->mmio);
+
+	/* DRM/KMS objects */
 	ddev = drm_dev_alloc(&rcar_du_driver, &pdev->dev);
 	if (IS_ERR(ddev))
 		return PTR_ERR(ddev);

@@ -322,24 +330,6 @@ static int rcar_du_probe(struct platform_device *pdev)
 	rcdu->ddev = ddev;
 	ddev->dev_private = rcdu;

-	platform_set_drvdata(pdev, rcdu);
-
-	/* I/O resources */
-	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	rcdu->mmio = devm_ioremap_resource(&pdev->dev, mem);
-	if (IS_ERR(rcdu->mmio)) {
-		ret = PTR_ERR(rcdu->mmio);
-		goto error;
-	}
-
-	/* Initialize vertical blanking interrupts handling. Start with vblank
-	 * disabled for all CRTCs.
-	 */
-	ret = drm_vblank_init(ddev, (1 << rcdu->info->num_crtcs) - 1);
-	if (ret < 0)
-		goto error;
-
-	/* DRM/KMS objects */
 	ret = rcar_du_modeset_init(rcdu);
 	if (ret < 0) {
 		if (ret != -EPROBE_DEFER)
@@ -567,6 +567,13 @@ int rcar_du_modeset_init(struct rcar_du_device *rcdu)
 	if (ret < 0)
 		return ret;

+	/* Initialize vertical blanking interrupts handling. Start with vblank
+	 * disabled for all CRTCs.
+	 */
+	ret = drm_vblank_init(dev, (1 << rcdu->info->num_crtcs) - 1);
+	if (ret < 0)
+		return ret;
+
 	/* Initialize the groups. */
 	num_groups = DIV_ROUND_UP(rcdu->num_crtcs, 2);
@@ -519,7 +519,7 @@ static int vmw_cmd_invalid(struct vmw_private *dev_priv,
 			   struct vmw_sw_context *sw_context,
 			   SVGA3dCmdHeader *header)
 {
-	return capable(CAP_SYS_ADMIN) ? : -EINVAL;
+	return -EINVAL;
 }

 static int vmw_cmd_ok(struct vmw_private *dev_priv,
@@ -2484,6 +2484,7 @@ static const struct hid_device_id hid_ignore_list[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0002) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0003) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0004) },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_PETZL, USB_DEVICE_ID_PETZL_HEADLAMP) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_PHILIPS, USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_POWERCOM, USB_DEVICE_ID_POWERCOM_UPS) },
 #if IS_ENABLED(CONFIG_MOUSE_SYNAPTICS_USB)
@@ -819,6 +819,9 @@
 #define USB_VENDOR_ID_PETALYNX		0x18b1
 #define USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE	0x0037

+#define USB_VENDOR_ID_PETZL		0x2122
+#define USB_DEVICE_ID_PETZL_HEADLAMP	0x1234
+
 #define USB_VENDOR_ID_PHILIPS		0x0471
 #define USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE	0x0617
@@ -2342,8 +2342,9 @@ ssize_t ib_uverbs_modify_qp(struct ib_uverbs_file *file,
 	if (copy_from_user(&cmd, buf, sizeof cmd))
 		return -EFAULT;

-	if (cmd.port_num < rdma_start_port(ib_dev) ||
-	    cmd.port_num > rdma_end_port(ib_dev))
+	if ((cmd.attr_mask & IB_QP_PORT) &&
+	    (cmd.port_num < rdma_start_port(ib_dev) ||
+	     cmd.port_num > rdma_end_port(ib_dev)))
 		return -EINVAL;

 	INIT_UDATA(&udata, buf + sizeof cmd, NULL, in_len - sizeof cmd,
@@ -1680,9 +1680,19 @@ static int __mlx4_ib_create_flow(struct ib_qp *qp, struct ib_flow_attr *flow_att
 		size += ret;
 	}

+	if (mlx4_is_master(mdev->dev) && flow_type == MLX4_FS_REGULAR &&
+	    flow_attr->num_of_specs == 1) {
+		struct _rule_hw *rule_header = (struct _rule_hw *)(ctrl + 1);
+		enum ib_flow_spec_type header_spec =
+			((union ib_flow_spec *)(flow_attr + 1))->type;
+
+		if (header_spec == IB_FLOW_SPEC_ETH)
+			mlx4_handle_eth_header_mcast_prio(ctrl, rule_header);
+	}
+
 	ret = mlx4_cmd_imm(mdev->dev, mailbox->dma, reg_id, size >> 2, 0,
 			   MLX4_QP_FLOW_STEERING_ATTACH, MLX4_CMD_TIME_CLASS_A,
-			   MLX4_CMD_WRAPPED);
+			   MLX4_CMD_NATIVE);
 	if (ret == -ENOMEM)
 		pr_err("mcg table is full. Fail to register network rule.\n");
 	else if (ret == -ENXIO)

@@ -1699,7 +1709,7 @@ static int __mlx4_ib_destroy_flow(struct mlx4_dev *dev, u64 reg_id)
 	int err;
 	err = mlx4_cmd(dev, reg_id, 0, 0,
 		       MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A,
-		       MLX4_CMD_WRAPPED);
+		       MLX4_CMD_NATIVE);
 	if (err)
 		pr_err("Fail to detach network rule. registration id = 0x%llx\n",
 		       reg_id);
@@ -19,9 +19,9 @@
 #include <linux/bitops.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>
+#include <linux/interrupt.h>
 #include <linux/irqdomain.h>
 #include <linux/irqchip.h>
-#include <linux/irqchip/chained_irq.h>
 #include <linux/of.h>
 #include <linux/of_platform.h>
 #include <linux/mfd/syscon.h>

@@ -39,6 +39,7 @@ struct keystone_irq_device {
 	struct irq_domain	*irqd;
 	struct regmap		*devctrl_regs;
 	u32			devctrl_offset;
+	raw_spinlock_t		wa_lock;
 };

 static inline u32 keystone_irq_readl(struct keystone_irq_device *kirq)

@@ -83,17 +84,15 @@ static void keystone_irq_ack(struct irq_data *d)
 	/* nothing to do here */
 }

-static void keystone_irq_handler(struct irq_desc *desc)
+static irqreturn_t keystone_irq_handler(int irq, void *keystone_irq)
 {
-	unsigned int irq = irq_desc_get_irq(desc);
-	struct keystone_irq_device *kirq = irq_desc_get_handler_data(desc);
+	struct keystone_irq_device *kirq = keystone_irq;
+	unsigned long wa_lock_flags;
 	unsigned long pending;
 	int src, virq;

 	dev_dbg(kirq->dev, "start irq %d\n", irq);

-	chained_irq_enter(irq_desc_get_chip(desc), desc);
-
 	pending = keystone_irq_readl(kirq);
 	keystone_irq_writel(kirq, pending);

@@ -111,13 +110,15 @@ static void keystone_irq_handler(struct irq_desc *desc)
 			if (!virq)
 				dev_warn(kirq->dev, "spurious irq detected hwirq %d, virq %d\n",
 					 src, virq);
+			raw_spin_lock_irqsave(&kirq->wa_lock, wa_lock_flags);
 			generic_handle_irq(virq);
+			raw_spin_unlock_irqrestore(&kirq->wa_lock,
+						   wa_lock_flags);
 		}
 	}

-	chained_irq_exit(irq_desc_get_chip(desc), desc);
-
 	dev_dbg(kirq->dev, "end irq %d\n", irq);
+	return IRQ_HANDLED;
 }

 static int keystone_irq_map(struct irq_domain *h, unsigned int virq,

@@ -182,9 +183,16 @@ static int keystone_irq_probe(struct platform_device *pdev)
 		return -ENODEV;
 	}

+	raw_spin_lock_init(&kirq->wa_lock);
+
 	platform_set_drvdata(pdev, kirq);

-	irq_set_chained_handler_and_data(kirq->irq, keystone_irq_handler, kirq);
+	ret = request_irq(kirq->irq, keystone_irq_handler,
+			  0, dev_name(dev), kirq);
+	if (ret) {
+		irq_domain_remove(kirq->irqd);
+		return ret;
+	}

 	/* clear all source bits */
 	keystone_irq_writel(kirq, ~0x0);

@@ -199,6 +207,8 @@ static int keystone_irq_remove(struct platform_device *pdev)
 	struct keystone_irq_device *kirq = platform_get_drvdata(pdev);
 	int hwirq;

+	free_irq(kirq->irq, kirq);
+
 	for (hwirq = 0; hwirq < KEYSTONE_N_IRQ; hwirq++)
 		irq_dispose_mapping(irq_find_mapping(kirq->irqd, hwirq));
@@ -131,12 +131,16 @@ static struct irq_chip mxs_icoll_chip = {
 	.irq_ack = icoll_ack_irq,
 	.irq_mask = icoll_mask_irq,
 	.irq_unmask = icoll_unmask_irq,
+	.flags = IRQCHIP_MASK_ON_SUSPEND |
+		 IRQCHIP_SKIP_SET_WAKE,
 };

 static struct irq_chip asm9260_icoll_chip = {
 	.irq_ack = icoll_ack_irq,
 	.irq_mask = asm9260_mask_irq,
 	.irq_unmask = asm9260_unmask_irq,
+	.flags = IRQCHIP_MASK_ON_SUSPEND |
+		 IRQCHIP_SKIP_SET_WAKE,
 };

 asmlinkage void __exception_irq_entry icoll_handle_irq(struct pt_regs *regs)
@@ -1379,6 +1379,7 @@ isdn_ioctl(struct file *file, uint cmd, ulong arg)
 			if (arg) {
 				if (copy_from_user(bname, argp, sizeof(bname) - 1))
 					return -EFAULT;
+				bname[sizeof(bname)-1] = 0;
 			} else
 				return -EINVAL;
 			ret = mutex_lock_interruptible(&dev->mtx);
@@ -2611,10 +2611,9 @@ isdn_net_newslave(char *parm)
 	char newname[10];

 	if (p) {
-		/* Slave-Name MUST not be empty */
-		if (!strlen(p + 1))
+		/* Slave-Name MUST not be empty or overflow 'newname' */
+		if (strscpy(newname, p + 1, sizeof(newname)) <= 0)
 			return NULL;
-		strcpy(newname, p + 1);
 		*p = 0;
 		/* Master must already exist */
 		if (!(n = isdn_net_findif(parm)))
@@ -2364,7 +2364,7 @@ static struct ippp_ccp_reset_state *isdn_ppp_ccp_reset_alloc_state(struct ippp_s
 			  id);
 		return NULL;
 	} else {
-		rs = kzalloc(sizeof(struct ippp_ccp_reset_state), GFP_KERNEL);
+		rs = kzalloc(sizeof(struct ippp_ccp_reset_state), GFP_ATOMIC);
 		if (!rs)
 			return NULL;
 		rs->state = CCPResetIdle;
@@ -104,11 +104,14 @@ static void tx_tick(struct mbox_chan *chan, int r)
 	/* Submit next message */
 	msg_submit(chan);

+	if (!mssg)
+		return;
+
 	/* Notify the client */
-	if (mssg && chan->cl->tx_done)
+	if (chan->cl->tx_done)
 		chan->cl->tx_done(chan->cl, mssg, r);

-	if (chan->cl->tx_block)
+	if (r != -ETIME && chan->cl->tx_block)
 		complete(&chan->tx_complete);
 }

@@ -261,7 +264,7 @@ int mbox_send_message(struct mbox_chan *chan, void *mssg)
 	msg_submit(chan);

-	if (chan->cl->tx_block && chan->active_req) {
+	if (chan->cl->tx_block) {
 		unsigned long wait;
 		int ret;

@@ -272,8 +275,8 @@ int mbox_send_message(struct mbox_chan *chan, void *mssg)
 		ret = wait_for_completion_timeout(&chan->tx_complete, wait);
 		if (ret == 0) {
-			t = -EIO;
-			tx_tick(chan, -EIO);
+			t = -ETIME;
+			tx_tick(chan, t);
 		}
 	}
@@ -5843,6 +5843,8 @@ static void raid5_do_work(struct work_struct *work)
 	pr_debug("%d stripes handled\n", handled);

 	spin_unlock_irq(&conf->device_lock);
+
+	async_tx_issue_pending_all();
 	blk_finish_plug(&plug);

 	pr_debug("--- raid5worker inactive\n");
@@ -211,7 +211,7 @@ static int s5c73m3_3a_lock(struct s5c73m3 *state, struct v4l2_ctrl *ctrl)
 	}

 	if ((ctrl->val ^ ctrl->cur.val) & V4L2_LOCK_FOCUS)
-		ret = s5c73m3_af_run(state, ~af_lock);
+		ret = s5c73m3_af_run(state, !af_lock);

 	return ret;
 }
@@ -5186,7 +5186,9 @@ static netdev_features_t be_features_check(struct sk_buff *skb,
 	    skb->inner_protocol_type != ENCAP_TYPE_ETHER ||
 	    skb->inner_protocol != htons(ETH_P_TEB) ||
 	    skb_inner_mac_header(skb) - skb_transport_header(skb) !=
-	    sizeof(struct udphdr) + sizeof(struct vxlanhdr))
+		sizeof(struct udphdr) + sizeof(struct vxlanhdr) ||
+	    !adapter->vxlan_port ||
+	    udp_hdr(skb)->dest != adapter->vxlan_port)
 		return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);

 	return features;
@@ -118,8 +118,13 @@ static int mlx4_alloc_icm_coherent(struct device *dev, struct scatterlist *mem,
 	if (!buf)
 		return -ENOMEM;

+	if (offset_in_page(buf)) {
+		dma_free_coherent(dev, PAGE_SIZE << order,
+				  buf, sg_dma_address(mem));
+		return -ENOMEM;
+	}
+
 	sg_set_buf(mem, buf, PAGE_SIZE << order);
-	BUG_ON(mem->offset);
 	sg_dma_len(mem) = PAGE_SIZE << order;
 	return 0;
 }
@@ -42,6 +42,7 @@
 #include <linux/io-mapping.h>
 #include <linux/delay.h>
 #include <linux/kmod.h>
+#include <linux/etherdevice.h>
 #include <net/devlink.h>

 #include <linux/mlx4/device.h>

@@ -782,6 +783,23 @@ int mlx4_is_slave_active(struct mlx4_dev *dev, int slave)
 }
 EXPORT_SYMBOL(mlx4_is_slave_active);

+void mlx4_handle_eth_header_mcast_prio(struct mlx4_net_trans_rule_hw_ctrl *ctrl,
+				       struct _rule_hw *eth_header)
+{
+	if (is_multicast_ether_addr(eth_header->eth.dst_mac) ||
+	    is_broadcast_ether_addr(eth_header->eth.dst_mac)) {
+		struct mlx4_net_trans_rule_hw_eth *eth =
+			(struct mlx4_net_trans_rule_hw_eth *)eth_header;
+		struct _rule_hw *next_rule = (struct _rule_hw *)(eth + 1);
+		bool last_rule = next_rule->size == 0 && next_rule->id == 0 &&
+			next_rule->rsvd == 0;
+
+		if (last_rule)
+			ctrl->prio = cpu_to_be16(MLX4_DOMAIN_NIC);
+	}
+}
+EXPORT_SYMBOL(mlx4_handle_eth_header_mcast_prio);
+
 static void slave_adjust_steering_mode(struct mlx4_dev *dev,
 				       struct mlx4_dev_cap *dev_cap,
 				       struct mlx4_init_hca_param *hca_param)
@@ -4165,22 +4165,6 @@ static int validate_eth_header_mac(int slave, struct _rule_hw *eth_header,
 	return 0;
 }

-static void handle_eth_header_mcast_prio(struct mlx4_net_trans_rule_hw_ctrl *ctrl,
-					 struct _rule_hw *eth_header)
-{
-	if (is_multicast_ether_addr(eth_header->eth.dst_mac) ||
-	    is_broadcast_ether_addr(eth_header->eth.dst_mac)) {
-		struct mlx4_net_trans_rule_hw_eth *eth =
-			(struct mlx4_net_trans_rule_hw_eth *)eth_header;
-		struct _rule_hw *next_rule = (struct _rule_hw *)(eth + 1);
-		bool last_rule = next_rule->size == 0 && next_rule->id == 0 &&
-			next_rule->rsvd == 0;
-
-		if (last_rule)
-			ctrl->prio = cpu_to_be16(MLX4_DOMAIN_NIC);
-	}
-}
-
 /*
  * In case of missing eth header, append eth header with a MAC address
  * assigned to the VF.

@@ -4364,10 +4348,7 @@ int mlx4_QP_FLOW_STEERING_ATTACH_wrapper(struct mlx4_dev *dev, int slave,
 	header_id = map_hw_to_sw_id(be16_to_cpu(rule_header->id));

 	if (header_id == MLX4_NET_TRANS_RULE_ID_ETH)
-		handle_eth_header_mcast_prio(ctrl, rule_header);
-
-	if (slave == dev->caps.function)
-		goto execute;
+		mlx4_handle_eth_header_mcast_prio(ctrl, rule_header);

 	switch (header_id) {
 	case MLX4_NET_TRANS_RULE_ID_ETH:

@@ -4395,7 +4376,6 @@ int mlx4_QP_FLOW_STEERING_ATTACH_wrapper(struct mlx4_dev *dev, int slave,
 		goto err_put_qp;
 	}

-execute:
 	err = mlx4_cmd_imm(dev, inbox->dma, &vhcr->out_param,
 			   vhcr->in_modifier, 0,
 			   MLX4_QP_FLOW_STEERING_ATTACH, MLX4_CMD_TIME_CLASS_A,

@@ -4474,6 +4454,7 @@ int mlx4_QP_FLOW_STEERING_DETACH_wrapper(struct mlx4_dev *dev, int slave,
 	struct res_qp *rqp;
 	struct res_fs_rule *rrule;
 	u64 mirr_reg_id;
+	int qpn;

 	if (dev->caps.steering_mode !=
 	    MLX4_STEERING_MODE_DEVICE_MANAGED)

@@ -4490,10 +4471,11 @@ int mlx4_QP_FLOW_STEERING_DETACH_wrapper(struct mlx4_dev *dev, int slave,
 	}
 	mirr_reg_id = rrule->mirr_rule_id;
 	kfree(rrule->mirr_mbox);
+	qpn = rrule->qpn;

 	/* Release the rule form busy state before removal */
 	put_res(dev, slave, vhcr->in_param, RES_FS_RULE);
-	err = get_res(dev, slave, rrule->qpn, RES_QP, &rqp);
+	err = get_res(dev, slave, qpn, RES_QP, &rqp);
 	if (err)
 		return err;

@@ -4518,7 +4500,7 @@ int mlx4_QP_FLOW_STEERING_DETACH_wrapper(struct mlx4_dev *dev, int slave,
 	if (!err)
 		atomic_dec(&rqp->ref_count);
 out:
-	put_res(dev, slave, rrule->qpn, RES_QP);
+	put_res(dev, slave, qpn, RES_QP);
 	return err;
 }
@@ -672,6 +672,12 @@ int esw_offloads_init(struct mlx5_eswitch *esw, int nvports)
 		if (err)
 			goto err_reps;
 	}

+	/* disable PF RoCE so missed packets don't go through RoCE steering */
+	mlx5_dev_list_lock();
+	mlx5_remove_dev_by_protocol(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
+	mlx5_dev_list_unlock();
+
 	return 0;

 err_reps:

@@ -695,6 +701,11 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw)
 {
 	int err, err1, num_vfs = esw->dev->priv.sriov.num_vfs;

+	/* enable back PF RoCE */
+	mlx5_dev_list_lock();
+	mlx5_add_dev_by_protocol(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
+	mlx5_dev_list_unlock();
+
 	mlx5_eswitch_disable_sriov(esw);
 	err = mlx5_eswitch_enable_sriov(esw, num_vfs, SRIOV_LEGACY);
 	if (err) {
@@ -326,6 +326,7 @@ enum cfg_version {
 static const struct pci_device_id rtl8169_pci_tbl[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8129), 0, 0, RTL_CFG_0 },
 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8136), 0, 0, RTL_CFG_2 },
+	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8161), 0, 0, RTL_CFG_1 },
 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8167), 0, 0, RTL_CFG_0 },
 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8168), 0, 0, RTL_CFG_1 },
 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8169), 0, 0, RTL_CFG_0 },
@@ -819,6 +819,7 @@ static struct sh_eth_cpu_data sh7734_data = {
 	.tsu		= 1,
 	.hw_crc		= 1,
 	.select_mii	= 1,
+	.shift_rd0	= 1,
 };

 /* SH7763 */
@@ -1792,7 +1792,7 @@ static struct phy_driver genphy_driver[] = {
 	.phy_id		= 0xffffffff,
 	.phy_id_mask	= 0xffffffff,
 	.name		= "Generic PHY",
-	.soft_reset	= genphy_soft_reset,
+	.soft_reset	= genphy_no_soft_reset,
 	.config_init	= genphy_config_init,
 	.features	= PHY_GBIT_FEATURES | SUPPORTED_MII |
 			  SUPPORTED_AUI | SUPPORTED_FIBRE |
@@ -1369,6 +1369,7 @@ static struct usb_driver asix_driver = {
 	.probe =	usbnet_probe,
 	.suspend =	asix_suspend,
 	.resume =	asix_resume,
+	.reset_resume =	asix_resume,
 	.disconnect =	usbnet_disconnect,
 	.supports_autosuspend = 1,
 	.disable_hub_initiated_lpm = 1,
@@ -660,6 +660,9 @@ ath10k_wmi_vdev_spectral_conf(struct ath10k *ar,
 	struct sk_buff *skb;
 	u32 cmd_id;

+	if (!ar->wmi.ops->gen_vdev_spectral_conf)
+		return -EOPNOTSUPP;
+
 	skb = ar->wmi.ops->gen_vdev_spectral_conf(ar, arg);
 	if (IS_ERR(skb))
 		return PTR_ERR(skb);

@@ -675,6 +678,9 @@ ath10k_wmi_vdev_spectral_enable(struct ath10k *ar, u32 vdev_id, u32 trigger,
 	struct sk_buff *skb;
 	u32 cmd_id;

+	if (!ar->wmi.ops->gen_vdev_spectral_enable)
+		return -EOPNOTSUPP;
+
 	skb = ar->wmi.ops->gen_vdev_spectral_enable(ar, vdev_id, trigger,
 						    enable);
 	if (IS_ERR(skb))
@@ -384,18 +384,19 @@ static void wil_fw_error_worker(struct work_struct *work)
 	wil->last_fw_recovery = jiffies;

+	wil_info(wil, "fw error recovery requested (try %d)...\n",
+		 wil->recovery_count);
+	if (!no_fw_recovery)
+		wil->recovery_state = fw_recovery_running;
+	if (wil_wait_for_recovery(wil) != 0)
+		return;
+
 	mutex_lock(&wil->mutex);
 	switch (wdev->iftype) {
 	case NL80211_IFTYPE_STATION:
 	case NL80211_IFTYPE_P2P_CLIENT:
 	case NL80211_IFTYPE_MONITOR:
-		wil_info(wil, "fw error recovery requested (try %d)...\n",
-			 wil->recovery_count);
-		if (!no_fw_recovery)
-			wil->recovery_state = fw_recovery_running;
-		if (0 != wil_wait_for_recovery(wil))
-			break;
-
 		/* silent recovery, upper layers will see disconnect */
 		__wil_down(wil);
 		__wil_up(wil);
 		break;
@@ -210,14 +210,14 @@ static irqreturn_t fdp_nci_i2c_irq_thread_fn(int irq, void *phy_id)
 	struct sk_buff *skb;
 	int r;

-	client = phy->i2c_dev;
-	dev_dbg(&client->dev, "%s\n", __func__);
-
 	if (!phy || irq != phy->i2c_dev->irq) {
 		WARN_ON_ONCE(1);
 		return IRQ_NONE;
 	}

+	client = phy->i2c_dev;
+	dev_dbg(&client->dev, "%s\n", __func__);
+
 	r = fdp_nci_i2c_read(phy, &skb);

 	if (r == -EREMOTEIO)
@@ -725,23 +725,33 @@ static int port100_submit_urb_for_ack(struct port100 *dev, gfp_t flags)
 static int port100_send_ack(struct port100 *dev)
 {
-	int rc;
+	int rc = 0;

 	mutex_lock(&dev->out_urb_lock);

-	init_completion(&dev->cmd_cancel_done);
-
-	usb_kill_urb(dev->out_urb);
-
-	dev->out_urb->transfer_buffer = ack_frame;
-	dev->out_urb->transfer_buffer_length = sizeof(ack_frame);
-	rc = usb_submit_urb(dev->out_urb, GFP_KERNEL);
-
-	/* Set the cmd_cancel flag only if the URB has been successfully
-	 * submitted. It will be reset by the out URB completion callback
-	 * port100_send_complete().
-	 */
-	dev->cmd_cancel = !rc;
+	/*
+	 * If prior cancel is in-flight (dev->cmd_cancel == true), we
+	 * can skip to send cancel. Then this will wait the prior
+	 * cancel, or merged into the next cancel rarely if next
+	 * cancel was started before waiting done. In any case, this
+	 * will be waked up soon or later.
+	 */
+	if (!dev->cmd_cancel) {
+		reinit_completion(&dev->cmd_cancel_done);
+
+		usb_kill_urb(dev->out_urb);
+
+		dev->out_urb->transfer_buffer = ack_frame;
+		dev->out_urb->transfer_buffer_length = sizeof(ack_frame);
+		rc = usb_submit_urb(dev->out_urb, GFP_KERNEL);
+
+		/*
+		 * Set the cmd_cancel flag only if the URB has been
+		 * successfully submitted. It will be reset by the out
+		 * URB completion callback port100_send_complete().
+		 */
+		dev->cmd_cancel = !rc;
+	}

 	mutex_unlock(&dev->out_urb_lock);

@@ -928,8 +938,8 @@ static void port100_send_complete(struct urb *urb)
 	struct port100 *dev = urb->context;

 	if (dev->cmd_cancel) {
+		complete_all(&dev->cmd_cancel_done);
 		dev->cmd_cancel = false;
-		complete(&dev->cmd_cancel_done);
 	}

 	switch (urb->status) {

@@ -1543,6 +1553,7 @@ static int port100_probe(struct usb_interface *interface,
 			   PORT100_COMM_RF_HEAD_MAX_LEN;
 	dev->skb_tailroom = PORT100_FRAME_TAIL_LEN;

+	init_completion(&dev->cmd_cancel_done);
 	INIT_WORK(&dev->cmd_complete_work, port100_wq_cmd_complete);

 	/* The first thing to do with the Port-100 is to set the command type
@@ -71,7 +71,7 @@ static struct nvmem_config imx_ocotp_nvmem_config = {
 static const struct of_device_id imx_ocotp_dt_ids[] = {
 	{ .compatible = "fsl,imx6q-ocotp", (void *)128 },
-	{ .compatible = "fsl,imx6sl-ocotp", (void *)32 },
+	{ .compatible = "fsl,imx6sl-ocotp", (void *)64 },
 	{ .compatible = "fsl,imx6sx-ocotp", (void *)128 },
 	{ },
 };
@@ -64,9 +64,9 @@ int max_rport_logins = BFA_FCS_MAX_RPORT_LOGINS;
 u32	bfi_image_cb_size, bfi_image_ct_size, bfi_image_ct2_size;
 u32	*bfi_image_cb, *bfi_image_ct, *bfi_image_ct2;

-#define BFAD_FW_FILE_CB		"cbfw-3.2.3.0.bin"
-#define BFAD_FW_FILE_CT		"ctfw-3.2.3.0.bin"
-#define BFAD_FW_FILE_CT2	"ct2fw-3.2.3.0.bin"
+#define BFAD_FW_FILE_CB		"cbfw-3.2.5.1.bin"
+#define BFAD_FW_FILE_CT		"ctfw-3.2.5.1.bin"
+#define BFAD_FW_FILE_CT2	"ct2fw-3.2.5.1.bin"

 static u32 *bfad_load_fwimg(struct pci_dev *pdev);
 static void bfad_free_fwimg(void);
@@ -58,7 +58,7 @@
 #ifdef BFA_DRIVER_VERSION
 #define BFAD_DRIVER_VERSION    BFA_DRIVER_VERSION
 #else
-#define BFAD_DRIVER_VERSION    "3.2.25.0"
+#define BFAD_DRIVER_VERSION    "3.2.25.1"
 #endif

 #define BFAD_PROTO_NAME FCPI_NAME
@@ -248,6 +248,7 @@ struct fnic {
 	struct completion *remove_wait; /* device remove thread blocks */

 	atomic_t in_flight;		/* io counter */
+	bool internal_reset_inprogress;
 	u32 _reserved;			/* fill hole */
 	unsigned long state_flags;	/* protected by host lock */
 	enum fnic_state state;
@@ -2573,6 +2573,19 @@ int fnic_host_reset(struct scsi_cmnd *sc)
 	unsigned long wait_host_tmo;
 	struct Scsi_Host *shost = sc->device->host;
 	struct fc_lport *lp = shost_priv(shost);
+	struct fnic *fnic = lport_priv(lp);
+	unsigned long flags;
+
+	spin_lock_irqsave(&fnic->fnic_lock, flags);
+	if (fnic->internal_reset_inprogress == 0) {
+		fnic->internal_reset_inprogress = 1;
+	} else {
+		spin_unlock_irqrestore(&fnic->fnic_lock, flags);
+		FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+			      "host reset in progress skipping another host reset\n");
+		return SUCCESS;
+	}
+	spin_unlock_irqrestore(&fnic->fnic_lock, flags);

 	/*
 	 * If fnic_reset is successful, wait for fabric login to complete

@@ -2593,6 +2606,9 @@ int fnic_host_reset(struct scsi_cmnd *sc)
 		}
 	}

+	spin_lock_irqsave(&fnic->fnic_lock, flags);
+	fnic->internal_reset_inprogress = 0;
+	spin_unlock_irqrestore(&fnic->fnic_lock, flags);
 	return ret;
 }
@@ -591,6 +591,7 @@ snic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (!pool) {
 		SNIC_HOST_ERR(shost, "dflt sgl pool creation failed\n");

+		ret = -ENOMEM;
 		goto err_free_res;
 	}

@@ -601,6 +602,7 @@ snic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (!pool) {
 		SNIC_HOST_ERR(shost, "max sgl pool creation failed\n");

+		ret = -ENOMEM;
 		goto err_free_dflt_sgl_pool;
 	}

@@ -611,6 +613,7 @@ snic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (!pool) {
 		SNIC_HOST_ERR(shost, "snic tmreq info pool creation failed.\n");

+		ret = -ENOMEM;
 		goto err_free_max_sgl_pool;
 	}
@@ -107,7 +107,10 @@ static const struct file_operations dw_spi_regs_ops = {

 static int dw_spi_debugfs_init(struct dw_spi *dws)
 {
-	dws->debugfs = debugfs_create_dir("dw_spi", NULL);
+	char name[128];
+
+	snprintf(name, 128, "dw_spi-%s", dev_name(&dws->master->dev));
+	dws->debugfs = debugfs_create_dir(name, NULL);
 	if (!dws->debugfs)
 		return -ENOMEM;

@@ -2898,9 +2898,6 @@ static int __init comedi_init(void)

 	comedi_class->dev_groups = comedi_dev_groups;

-	/* XXX requires /proc interface */
-	comedi_proc_init();
-
 	/* create devices files for legacy/manual use */
 	for (i = 0; i < comedi_num_legacy_minors; i++) {
 		struct comedi_device *dev;

@@ -2918,6 +2915,9 @@ static int __init comedi_init(void)
 		mutex_unlock(&dev->mutex);
 	}

+	/* XXX requires /proc interface */
+	comedi_proc_init();
+
 	return 0;
 }
 module_init(comedi_init);

@@ -19,6 +19,7 @@
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
+#include <linux/irq.h>
 #include <linux/interrupt.h>
 #include <linux/platform_device.h>
 #include <linux/platform_data/dwc3-omap.h>

@@ -511,7 +512,7 @@ static int dwc3_omap_probe(struct platform_device *pdev)

 	/* check the DMA Status */
 	reg = dwc3_omap_readl(omap->base, USBOTGSS_SYSCONFIG);

+	irq_set_status_flags(omap->irq, IRQ_NOAUTOEN);
 	ret = devm_request_threaded_irq(dev, omap->irq, dwc3_omap_interrupt,
 					dwc3_omap_interrupt_thread, IRQF_SHARED,
 					"dwc3-omap", omap);

@@ -532,7 +533,7 @@ static int dwc3_omap_probe(struct platform_device *pdev)
 	}

 	dwc3_omap_enable_irqs(omap);
+	enable_irq(omap->irq);
 	return 0;

 err2:

@@ -553,6 +554,7 @@ static int dwc3_omap_remove(struct platform_device *pdev)
 	extcon_unregister_notifier(omap->edev, EXTCON_USB, &omap->vbus_nb);
 	extcon_unregister_notifier(omap->edev, EXTCON_USB_HOST, &omap->id_nb);
 	dwc3_omap_disable_irqs(omap);
+	disable_irq(omap->irq);
 	of_platform_depopulate(omap->dev);
 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);

@@ -582,7 +582,7 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
 		}
 		status = usb_ep_enable(hidg->out_ep);
 		if (status < 0) {
-			ERROR(cdev, "Enable IN endpoint FAILED!\n");
+			ERROR(cdev, "Enable OUT endpoint FAILED!\n");
 			goto fail;
 		}
 		hidg->out_ep->driver_data = hidg;

@@ -1173,6 +1173,10 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
 			return ret;

 		vdev->barmap[index] = pci_iomap(pdev, index, 0);
+		if (!vdev->barmap[index]) {
+			pci_release_selected_regions(pdev, 1 << index);
+			return -ENOMEM;
+		}
 	}

 	vma->vm_private_data = vdev;

@@ -193,7 +193,10 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
 	if (!vdev->has_vga)
 		return -EINVAL;

-	switch (pos) {
+	if (pos > 0xbfffful)
+		return -EINVAL;
+
+	switch ((u32)pos) {
 	case 0xa0000 ... 0xbffff:
 		count = min(count, (size_t)(0xc0000 - pos));
 		iomem = ioremap_nocache(0xa0000, 0xbffff - 0xa0000 + 1);

@@ -308,6 +308,11 @@ static int cobalt_lcdfb_probe(struct platform_device *dev)
 	info->screen_size = resource_size(res);
 	info->screen_base = devm_ioremap(&dev->dev, res->start,
 					 info->screen_size);
+	if (!info->screen_base) {
+		framebuffer_release(info);
+		return -ENOMEM;
+	}
+
 	info->fbops = &cobalt_lcd_fbops;
 	info->fix = cobalt_lcdfb_fix;
 	info->fix.smem_start = res->start;

@@ -58,9 +58,13 @@ static int xen_map_device_mmio(const struct resource *resources,
 	xen_pfn_t *gpfns;
 	xen_ulong_t *idxs;
 	int *errs;
-	struct xen_add_to_physmap_range xatp;

 	for (i = 0; i < count; i++) {
+		struct xen_add_to_physmap_range xatp = {
+			.domid = DOMID_SELF,
+			.space = XENMAPSPACE_dev_mmio
+		};
+
 		r = &resources[i];
 		nr = DIV_ROUND_UP(resource_size(r), XEN_PAGE_SIZE);
 		if ((resource_type(r) != IORESOURCE_MEM) || (nr == 0))

@@ -87,9 +91,7 @@ static int xen_map_device_mmio(const struct resource *resources,
 			idxs[j] = XEN_PFN_DOWN(r->start) + j;
 		}

-		xatp.domid = DOMID_SELF;
 		xatp.size = nr;
-		xatp.space = XENMAPSPACE_dev_mmio;

 		set_xen_guest_handle(xatp.gpfns, gpfns);
 		set_xen_guest_handle(xatp.idxs, idxs);

@@ -7401,7 +7401,8 @@ btrfs_lock_cluster(struct btrfs_block_group_cache *block_group,

 		spin_unlock(&cluster->refill_lock);

-		down_read(&used_bg->data_rwsem);
+		/* We should only have one-level nested. */
+		down_read_nested(&used_bg->data_rwsem, SINGLE_DEPTH_NESTING);

 		spin_lock(&cluster->refill_lock);
 		if (used_bg == cluster->block_group)

@@ -7648,11 +7648,18 @@ static void adjust_dio_outstanding_extents(struct inode *inode,
 	 * within our reservation, otherwise we need to adjust our inode
 	 * counter appropriately.
 	 */
-	if (dio_data->outstanding_extents) {
+	if (dio_data->outstanding_extents >= num_extents) {
 		dio_data->outstanding_extents -= num_extents;
 	} else {
+		/*
+		 * If dio write length has been split due to no large enough
+		 * contiguous space, we need to compensate our inode counter
+		 * appropriately.
+		 */
+		u64 num_needed = num_extents - dio_data->outstanding_extents;
+
 		spin_lock(&BTRFS_I(inode)->lock);
-		BTRFS_I(inode)->outstanding_extents += num_extents;
+		BTRFS_I(inode)->outstanding_extents += num_needed;
 		spin_unlock(&BTRFS_I(inode)->lock);
 	}
 }

@@ -37,6 +37,7 @@
  */
 #define LOG_INODE_ALL 0
 #define LOG_INODE_EXISTS 1
+#define LOG_OTHER_INODE 2

 /*
  * directory trouble cases

@@ -4623,7 +4624,7 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 	if (S_ISDIR(inode->i_mode) ||
 	    (!test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
 		       &BTRFS_I(inode)->runtime_flags) &&
-	     inode_only == LOG_INODE_EXISTS))
+	     inode_only >= LOG_INODE_EXISTS))
 		max_key.type = BTRFS_XATTR_ITEM_KEY;
 	else
 		max_key.type = (u8)-1;

@@ -4647,7 +4648,13 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 		return ret;
 	}

-	mutex_lock(&BTRFS_I(inode)->log_mutex);
+	if (inode_only == LOG_OTHER_INODE) {
+		inode_only = LOG_INODE_EXISTS;
+		mutex_lock_nested(&BTRFS_I(inode)->log_mutex,
+				  SINGLE_DEPTH_NESTING);
+	} else {
+		mutex_lock(&BTRFS_I(inode)->log_mutex);
+	}

 	/*
 	 * a brute force approach to making sure we get the most uptodate

@@ -4799,7 +4806,7 @@ again:
 			 * unpin it.
 			 */
 			err = btrfs_log_inode(trans, root, other_inode,
-					      LOG_INODE_EXISTS,
+					      LOG_OTHER_INODE,
 					      0, LLONG_MAX, ctx);
 			iput(other_inode);
 			if (err)

fs/dcache.c (27 changes)

@@ -277,6 +277,33 @@ static inline int dname_external(const struct dentry *dentry)
 	return dentry->d_name.name != dentry->d_iname;
 }

+void take_dentry_name_snapshot(struct name_snapshot *name, struct dentry *dentry)
+{
+	spin_lock(&dentry->d_lock);
+	if (unlikely(dname_external(dentry))) {
+		struct external_name *p = external_name(dentry);
+		atomic_inc(&p->u.count);
+		spin_unlock(&dentry->d_lock);
+		name->name = p->name;
+	} else {
+		memcpy(name->inline_name, dentry->d_iname, DNAME_INLINE_LEN);
+		spin_unlock(&dentry->d_lock);
+		name->name = name->inline_name;
+	}
+}
+EXPORT_SYMBOL(take_dentry_name_snapshot);
+
+void release_dentry_name_snapshot(struct name_snapshot *name)
+{
+	if (unlikely(name->name != name->inline_name)) {
+		struct external_name *p;
+		p = container_of(name->name, struct external_name, name[0]);
+		if (unlikely(atomic_dec_and_test(&p->u.count)))
+			kfree_rcu(p, u.head);
+	}
+}
+EXPORT_SYMBOL(release_dentry_name_snapshot);
+
 static inline void __d_set_inode_and_type(struct dentry *dentry,
 					  struct inode *inode,
 					  unsigned type_flags)

@@ -730,7 +730,7 @@ struct dentry *debugfs_rename(struct dentry *old_dir, struct dentry *old_dentry,
 {
 	int error;
 	struct dentry *dentry = NULL, *trap;
-	const char *old_name;
+	struct name_snapshot old_name;

 	trap = lock_rename(new_dir, old_dir);
 	/* Source or destination directories don't exist? */

@@ -745,19 +745,19 @@ struct dentry *debugfs_rename(struct dentry *old_dir, struct dentry *old_dentry,
 	if (IS_ERR(dentry) || dentry == trap || d_really_is_positive(dentry))
 		goto exit;

-	old_name = fsnotify_oldname_init(old_dentry->d_name.name);
+	take_dentry_name_snapshot(&old_name, old_dentry);

 	error = simple_rename(d_inode(old_dir), old_dentry, d_inode(new_dir),
 			      dentry, 0);
 	if (error) {
-		fsnotify_oldname_free(old_name);
+		release_dentry_name_snapshot(&old_name);
 		goto exit;
 	}
 	d_move(old_dentry, dentry);
-	fsnotify_move(d_inode(old_dir), d_inode(new_dir), old_name,
+	fsnotify_move(d_inode(old_dir), d_inode(new_dir), old_name.name,
 		      d_is_dir(old_dentry),
 		      NULL, old_dentry);
-	fsnotify_oldname_free(old_name);
+	release_dentry_name_snapshot(&old_name);
 	unlock_rename(new_dir, old_dir);
 	dput(dentry);
 	return old_dentry;

fs/jfs/acl.c (15 changes)

@@ -77,13 +77,6 @@ static int __jfs_set_acl(tid_t tid, struct inode *inode, int type,
 	switch (type) {
 	case ACL_TYPE_ACCESS:
 		ea_name = XATTR_NAME_POSIX_ACL_ACCESS;
-		if (acl) {
-			rc = posix_acl_update_mode(inode, &inode->i_mode, &acl);
-			if (rc)
-				return rc;
-			inode->i_ctime = current_time(inode);
-			mark_inode_dirty(inode);
-		}
 		break;
 	case ACL_TYPE_DEFAULT:
 		ea_name = XATTR_NAME_POSIX_ACL_DEFAULT;

@@ -118,9 +111,17 @@ int jfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)

 	tid = txBegin(inode->i_sb, 0);
 	mutex_lock(&JFS_IP(inode)->commit_mutex);
+	if (type == ACL_TYPE_ACCESS && acl) {
+		rc = posix_acl_update_mode(inode, &inode->i_mode, &acl);
+		if (rc)
+			goto end_tx;
+		inode->i_ctime = current_time(inode);
+		mark_inode_dirty(inode);
+	}
 	rc = __jfs_set_acl(tid, inode, type, acl);
 	if (!rc)
 		rc = txCommit(tid, 1, &inode, 0);
+end_tx:
 	txEnd(tid);
 	mutex_unlock(&JFS_IP(inode)->commit_mutex);
 	return rc;

@@ -4404,11 +4404,11 @@ int vfs_rename2(struct vfsmount *mnt,
 {
 	int error;
 	bool is_dir = d_is_dir(old_dentry);
-	const unsigned char *old_name;
 	struct inode *source = old_dentry->d_inode;
 	struct inode *target = new_dentry->d_inode;
 	bool new_is_dir = false;
 	unsigned max_links = new_dir->i_sb->s_max_links;
+	struct name_snapshot old_name;

 	/*
 	 * Check source == target.

@@ -4459,7 +4459,7 @@ int vfs_rename2(struct vfsmount *mnt,
 	if (error)
 		return error;

-	old_name = fsnotify_oldname_init(old_dentry->d_name.name);
+	take_dentry_name_snapshot(&old_name, old_dentry);
 	dget(new_dentry);
 	if (!is_dir || (flags & RENAME_EXCHANGE))
 		lock_two_nondirectories(source, target);

@@ -4514,14 +4514,14 @@ out:
 		inode_unlock(target);
 	dput(new_dentry);
 	if (!error) {
-		fsnotify_move(old_dir, new_dir, old_name, is_dir,
+		fsnotify_move(old_dir, new_dir, old_name.name, is_dir,
 			      !(flags & RENAME_EXCHANGE) ? target : NULL, old_dentry);
 		if (flags & RENAME_EXCHANGE) {
 			fsnotify_move(new_dir, old_dir, old_dentry->d_name.name,
 				      new_is_dir, NULL, new_dentry);
 		}
 	}
-	fsnotify_oldname_free(old_name);
+	release_dentry_name_snapshot(&old_name);

 	return error;
 }

@@ -757,7 +757,7 @@ do_setlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
 	 */
 	nfs_sync_mapping(filp->f_mapping);
 	if (!NFS_PROTO(inode)->have_delegation(inode, FMODE_READ))
-		nfs_zap_mapping(inode, filp->f_mapping);
+		nfs_zap_caches(inode);
 out:
 	return status;
 }

@@ -6419,7 +6419,7 @@ nfs4_retry_setlk(struct nfs4_state *state, int cmd, struct file_lock *request)
 		set_current_state(TASK_INTERRUPTIBLE);
 		spin_unlock_irqrestore(&q->lock, flags);

-		freezable_schedule_timeout_interruptible(NFS4_LOCK_MAXTIMEOUT);
+		freezable_schedule_timeout(NFS4_LOCK_MAXTIMEOUT);
 	}

 	finish_wait(q, &wait);

@@ -104,16 +104,20 @@ int __fsnotify_parent(struct path *path, struct dentry *dentry, __u32 mask)
 	if (unlikely(!fsnotify_inode_watches_children(p_inode)))
 		__fsnotify_update_child_dentry_flags(p_inode);
 	else if (p_inode->i_fsnotify_mask & mask) {
+		struct name_snapshot name;
+
 		/* we are notifying a parent so come up with the new mask which
 		 * specifies these are events which came from a child. */
 		mask |= FS_EVENT_ON_CHILD;

+		take_dentry_name_snapshot(&name, dentry);
 		if (path)
 			ret = fsnotify(p_inode, mask, path, FSNOTIFY_EVENT_PATH,
-				       dentry->d_name.name, 0);
+				       name.name, 0);
 		else
 			ret = fsnotify(p_inode, mask, dentry->d_inode, FSNOTIFY_EVENT_INODE,
-				       dentry->d_name.name, 0);
+				       name.name, 0);
+		release_dentry_name_snapshot(&name);
 	}

 	dput(parent);

@@ -434,7 +434,7 @@ static int ramoops_init_przs(struct device *dev, struct ramoops_context *cxt,
 	for (i = 0; i < cxt->max_dump_cnt; i++) {
 		cxt->przs[i] = persistent_ram_new(*paddr, cxt->record_size, 0,
 						  &cxt->ecc_info,
-						  cxt->memtype);
+						  cxt->memtype, 0);
 		if (IS_ERR(cxt->przs[i])) {
 			err = PTR_ERR(cxt->przs[i]);
 			dev_err(dev, "failed to request mem region (0x%zx@0x%llx): %d\n",

@@ -471,7 +471,8 @@ static int ramoops_init_prz(struct device *dev, struct ramoops_context *cxt,
 		return -ENOMEM;
 	}

-	*prz = persistent_ram_new(*paddr, sz, sig, &cxt->ecc_info, cxt->memtype);
+	*prz = persistent_ram_new(*paddr, sz, sig, &cxt->ecc_info,
+				  cxt->memtype, 0);
 	if (IS_ERR(*prz)) {
 		int err = PTR_ERR(*prz);

@@ -48,16 +48,15 @@ static inline size_t buffer_start(struct persistent_ram_zone *prz)
 	return atomic_read(&prz->buffer->start);
 }

-static DEFINE_RAW_SPINLOCK(buffer_lock);
-
 /* increase and wrap the start pointer, returning the old value */
 static size_t buffer_start_add(struct persistent_ram_zone *prz, size_t a)
 {
 	int old;
 	int new;
-	unsigned long flags;
+	unsigned long flags = 0;

-	raw_spin_lock_irqsave(&buffer_lock, flags);
+	if (!(prz->flags & PRZ_FLAG_NO_LOCK))
+		raw_spin_lock_irqsave(&prz->buffer_lock, flags);

 	old = atomic_read(&prz->buffer->start);
 	new = old + a;

@@ -65,7 +64,8 @@ static size_t buffer_start_add(struct persistent_ram_zone *prz, size_t a)
 		new -= prz->buffer_size;
 	atomic_set(&prz->buffer->start, new);

-	raw_spin_unlock_irqrestore(&buffer_lock, flags);
+	if (!(prz->flags & PRZ_FLAG_NO_LOCK))
+		raw_spin_unlock_irqrestore(&prz->buffer_lock, flags);

 	return old;
 }

@@ -75,9 +75,10 @@ static void buffer_size_add(struct persistent_ram_zone *prz, size_t a)
 {
 	size_t old;
 	size_t new;
-	unsigned long flags;
+	unsigned long flags = 0;

-	raw_spin_lock_irqsave(&buffer_lock, flags);
+	if (!(prz->flags & PRZ_FLAG_NO_LOCK))
+		raw_spin_lock_irqsave(&prz->buffer_lock, flags);

 	old = atomic_read(&prz->buffer->size);
 	if (old == prz->buffer_size)

@@ -89,7 +90,8 @@ static void buffer_size_add(struct persistent_ram_zone *prz, size_t a)
 	atomic_set(&prz->buffer->size, new);

 exit:
-	raw_spin_unlock_irqrestore(&buffer_lock, flags);
+	if (!(prz->flags & PRZ_FLAG_NO_LOCK))
+		raw_spin_unlock_irqrestore(&prz->buffer_lock, flags);
 }

 static void notrace persistent_ram_encode_rs8(struct persistent_ram_zone *prz,

@@ -491,6 +493,7 @@ static int persistent_ram_post_init(struct persistent_ram_zone *prz, u32 sig,
 			    prz->buffer->sig);
 	}

+	/* Rewind missing or invalid memory area. */
 	prz->buffer->sig = sig;
 	persistent_ram_zap(prz);

@@ -517,7 +520,7 @@ void persistent_ram_free(struct persistent_ram_zone *prz)

 struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
 			u32 sig, struct persistent_ram_ecc_info *ecc_info,
-			unsigned int memtype)
+			unsigned int memtype, u32 flags)
 {
 	struct persistent_ram_zone *prz;
 	int ret = -ENOMEM;

@@ -528,6 +531,10 @@ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
 		goto err;
 	}

+	/* Initialize general buffer state. */
+	raw_spin_lock_init(&prz->buffer_lock);
+	prz->flags = flags;
+
 	ret = persistent_ram_buffer_map(start, size, prz, memtype);
 	if (ret)
 		goto err;

@@ -591,5 +591,11 @@ static inline struct inode *d_real_inode(const struct dentry *dentry)
 	return d_backing_inode(d_real((struct dentry *) dentry, NULL, 0));
 }

+struct name_snapshot {
+	const char *name;
+	char inline_name[DNAME_INLINE_LEN];
+};
+void take_dentry_name_snapshot(struct name_snapshot *, struct dentry *);
+void release_dentry_name_snapshot(struct name_snapshot *);
+
 #endif /* __LINUX_DCACHE_H */

@@ -293,35 +293,4 @@ static inline void fsnotify_change(struct dentry *dentry, unsigned int ia_valid)
 	}
 }

-#if defined(CONFIG_FSNOTIFY) /* notify helpers */
-
-/*
- * fsnotify_oldname_init - save off the old filename before we change it
- */
-static inline const unsigned char *fsnotify_oldname_init(const unsigned char *name)
-{
-	return kstrdup(name, GFP_KERNEL);
-}
-
-/*
- * fsnotify_oldname_free - free the name we got from fsnotify_oldname_init
- */
-static inline void fsnotify_oldname_free(const unsigned char *old_name)
-{
-	kfree(old_name);
-}
-
-#else /* CONFIG_FSNOTIFY */
-
-static inline const char *fsnotify_oldname_init(const unsigned char *name)
-{
-	return NULL;
-}
-
-static inline void fsnotify_oldname_free(const unsigned char *old_name)
-{
-}
-
-#endif /* CONFIG_FSNOTIFY */
-
 #endif /* _LINUX_FS_NOTIFY_H */

@@ -1384,6 +1384,8 @@ int set_phv_bit(struct mlx4_dev *dev, u8 port, int new_val);
 int get_phv_bit(struct mlx4_dev *dev, u8 port, int *phv);
+int mlx4_get_is_vlan_offload_disabled(struct mlx4_dev *dev, u8 port,
+				      bool *vlan_offload_disabled);
 void mlx4_handle_eth_header_mcast_prio(struct mlx4_net_trans_rule_hw_ctrl *ctrl,
 				       struct _rule_hw *eth_header);
 int mlx4_find_cached_mac(struct mlx4_dev *dev, u8 port, u64 mac, int *idx);
 int mlx4_find_cached_vlan(struct mlx4_dev *dev, u8 port, u16 vid, int *idx);
 int mlx4_register_vlan(struct mlx4_dev *dev, u8 port, u16 vlan, int *index);

@@ -799,6 +799,10 @@ int genphy_read_status(struct phy_device *phydev);
 int genphy_suspend(struct phy_device *phydev);
 int genphy_resume(struct phy_device *phydev);
 int genphy_soft_reset(struct phy_device *phydev);
+static inline int genphy_no_soft_reset(struct phy_device *phydev)
+{
+	return 0;
+}
 void phy_driver_unregister(struct phy_driver *drv);
 void phy_drivers_unregister(struct phy_driver *drv, int n);
 int phy_driver_register(struct phy_driver *new_driver, struct module *owner);

@@ -24,6 +24,13 @@
 #include <linux/list.h>
 #include <linux/types.h>

+/*
+ * Choose whether access to the RAM zone requires locking or not. If a zone
+ * can be written to from different CPUs like with ftrace for example, then
+ * PRZ_FLAG_NO_LOCK is used. For all other cases, locking is required.
+ */
+#define PRZ_FLAG_NO_LOCK BIT(0)
+
 struct persistent_ram_buffer;
 struct rs_control;

@@ -40,6 +47,8 @@ struct persistent_ram_zone {
 	void *vaddr;
 	struct persistent_ram_buffer *buffer;
 	size_t buffer_size;
+	u32 flags;
+	raw_spinlock_t buffer_lock;

 	/* ECC correction */
 	char *par_buffer;

@@ -55,7 +64,7 @@ struct persistent_ram_zone {

 struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
 			u32 sig, struct persistent_ram_ecc_info *ecc_info,
-			unsigned int memtype);
+			unsigned int memtype, u32 flags);
 void persistent_ram_free(struct persistent_ram_zone *prz);
 void persistent_ram_zap(struct persistent_ram_zone *prz);

kernel/cpu.c (38 changes)

@@ -410,11 +410,26 @@ static int notify_online(unsigned int cpu)
 	return 0;
 }

+static void __cpuhp_kick_ap_work(struct cpuhp_cpu_state *st);
+
 static int bringup_wait_for_ap(unsigned int cpu)
 {
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);

+	/* Wait for the CPU to reach CPUHP_AP_ONLINE_IDLE */
 	wait_for_completion(&st->done);
+	if (WARN_ON_ONCE((!cpu_online(cpu))))
+		return -ECANCELED;
+
+	/* Unpark the stopper thread and the hotplug thread of the target cpu */
+	stop_machine_unpark(cpu);
+	kthread_unpark(st->thread);
+
+	/* Should we go further up ? */
+	if (st->target > CPUHP_AP_ONLINE_IDLE) {
+		__cpuhp_kick_ap_work(st);
+		wait_for_completion(&st->done);
+	}
 	return st->result;
 }

@@ -437,9 +452,7 @@ static int bringup_cpu(unsigned int cpu)
 		cpu_notify(CPU_UP_CANCELED, cpu);
 		return ret;
 	}
-	ret = bringup_wait_for_ap(cpu);
-	BUG_ON(!cpu_online(cpu));
-	return ret;
+	return bringup_wait_for_ap(cpu);
 }

 /*

@@ -974,31 +987,20 @@ void notify_cpu_starting(unsigned int cpu)
 }

 /*
- * Called from the idle task. We need to set active here, so we can kick off
- * the stopper thread and unpark the smpboot threads. If the target state is
- * beyond CPUHP_AP_ONLINE_IDLE we kick cpuhp thread and let it bring up the
- * cpu further.
+ * Called from the idle task. Wake up the controlling task which brings the
+ * stopper and the hotplug thread of the upcoming CPU up and then delegates
+ * the rest of the online bringup to the hotplug thread.
 */
 void cpuhp_online_idle(enum cpuhp_state state)
 {
 	struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);
-	unsigned int cpu = smp_processor_id();

 	/* Happens for the boot cpu */
 	if (state != CPUHP_AP_ONLINE_IDLE)
 		return;

 	st->state = CPUHP_AP_ONLINE_IDLE;
-
-	/* Unpark the stopper thread and the hotplug thread of this cpu */
-	stop_machine_unpark(cpu);
-	kthread_unpark(st->thread);
-
-	/* Should we go further up ? */
-	if (st->target > CPUHP_AP_ONLINE_IDLE)
-		__cpuhp_kick_ap_work(st);
-	else
-		complete(&st->done);
+	complete(&st->done);
 }

 /* Requires cpu_add_remove_lock to be held */

@@ -8655,11 +8655,20 @@ cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 	if (IS_ERR(tg))
 		return ERR_PTR(-ENOMEM);

-	sched_online_group(tg, parent);
-
 	return &tg->css;
 }

+/* Expose task group only after completing cgroup initialization */
+static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
+{
+	struct task_group *tg = css_tg(css);
+	struct task_group *parent = css_tg(css->parent);
+
+	if (parent)
+		sched_online_group(tg, parent);
+	return 0;
+}
+
 static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
 {
 	struct task_group *tg = css_tg(css);

@@ -9062,6 +9071,7 @@ static struct cftype cpu_files[] = {

 struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_alloc = cpu_cgroup_css_alloc,
+	.css_online = cpu_cgroup_css_online,
 	.css_released = cpu_cgroup_css_released,
 	.css_free = cpu_cgroup_css_free,
 	.fork = cpu_cgroup_fork,
(Some files were not shown because too many files have changed in this diff.)