IF_ADDR_UNLOCK() across network device drivers when accessing the
per-interface multicast address list, if_multiaddrs. This will
allow us to change the locking strategy without affecting our driver
programming interface or binary interface.
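As a rough sketch of the pattern this covers (the driver body is
illustrative, not taken from any particular driver), a driver walks
if_multiaddrs only while holding the address-list lock:

    struct ifmultiaddr *ifma;

    IF_ADDR_LOCK(ifp);
    TAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) {
        if (ifma->ifma_addr->sa_family != AF_LINK)
            continue;
        /* Program the hardware multicast filter from
         * LLADDR((struct sockaddr_dl *)ifma->ifma_addr). */
    }
    IF_ADDR_UNLOCK(ifp);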
For two wireless drivers, remove unnecessary locking, since they
don't actually access the multicast address list.
Approved by: re (kib)
MFC after: 6 weeks
required by video card drivers. Specifically, this change introduces
vm_cache_mode_t with an appropriate VM_CACHE_DEFAULT definition on all
architectures. In addition, this change adds a vm_cache_mode_t parameter
to kmem_alloc_contig() and vm_phys_alloc_contig(). These will be the
interfaces for allocating mapped kernel memory and physical memory,
respectively, with non-default cache modes.
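A minimal usage sketch, assuming the parameter order of the extended
kmem_alloc_contig() and using only the VM_CACHE_DEFAULT mode named
above (other mode names would be assumptions):

    vm_size_t size = 4 * PAGE_SIZE;     /* example size */
    vm_offset_t va;

    /* Wired, physically contiguous memory below 4GB, page-aligned,
     * no boundary constraint, default cache mode. */
    va = kmem_alloc_contig(kernel_map, size, M_WAITOK, 0, 0xffffffff,
        PAGE_SIZE, 0, VM_CACHE_DEFAULT);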
In collaboration with: jhb
- Modules and kernel code alike may use DPCPU_DEFINE(),
DPCPU_GET(), DPCPU_SET(), etc. akin to the statically defined
PCPU_* (see the sketch after this list). Requires only one
instruction more than PCPU_* and is virtually the same as __thread
for the builtin TLS model, and much faster for shared objects.
DPCPU variables can be initialized when defined.
- Modules are supported by relocating the module's per-cpu linker set
over space reserved in the kernel. Modules may fail to load if there
is insufficient space available.
- Track space available for modules with a one-off extent allocator.
Free may block for memory to allocate space for an extent.
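A sketch of the API described in the first item above (the counter
name is made up for illustration; DPCPU variables live in a per-cpu
linker set):

    #include <sys/param.h>
    #include <sys/pcpu.h>

    /* A dynamic per-cpu counter, initialized at definition. */
    DPCPU_DEFINE(int, mydrv_pkts) = 0;

    static void
    mydrv_count_packet(void)
    {
        DPCPU_SET(mydrv_pkts, DPCPU_GET(mydrv_pkts) + 1);
    }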
Reviewed by: jhb, rwatson, kan, sam, grehan, marius, marcel, stas
driver's i/o ops must be locked to avoid chaos. Extend the cambria
bus tag to support ata and add a spin lock. The ata driver is
hacked to use that instead of its builtin hack for ixp425. Once
the ata driver is fixed to not be confused about byte order we can
generalize the cambria bus tag code and make it generally useful.
While here take advantage of our being ixp435-specific to remove
delays when switching between byte+word accesses and to eliminate
the 2us delay for the uarts (the spin lock overhead looks to do
this for us).
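The shape of the locked accessors, as a hedged sketch (the lock and
function names here are invented for illustration; the real ones live
in the cambria bus tag code):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx cambria_ata_mtx;
    MTX_SYSINIT(cambria_ata, &cambria_ata_mtx, "cambria_ata", MTX_SPIN);

    static uint16_t
    cambria_ata_read_2(volatile uint16_t *reg)
    {
        uint16_t v;

        mtx_lock_spin(&cambria_ata_mtx);
        v = *reg;               /* raw expansion-bus access */
        mtx_unlock_spin(&cambria_ata_mtx);
        return (v);
    }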
used for the optional GPS+RS485 uarts on the Gateworks Cambria boards,
which otherwise are unreliable:
o set up the hack bus space tag for the GPS+RS485 uarts
o program the gpio interrupts for the uarts to be edge-rising
o force timing on the expansion bus for the uarts to be "slow"
Thanks to Chris Lang of Gateworks for these tips.
structure. When the page is shared, the kernel mapping becomes a special
type of managed page to force the cache off the page mappings. This is
needed to avoid stale entries on all ARM VIVT caches, and VIPT caches
with cache color issues.
Submitted by: Mark Tinguely
Reviewed by: alc
Tested by: Grzegorz Bernacki, thompsa
causes both to become inoperative; this apparently was done by the original
IAL code as a workaround for IMEM parity errors, which we've not seen,
so just disable the reset.
Note this problem does not occur on IXP425 boards. The linux driver does
fuse-resets on each NPE but in the order NPE-A < NPE-B < NPE-C (when probing
for which NPE's are present/operational); we may want to switch to a similar
scheme but for now disable the resets until we see an issue.
normally taken from the hints file, so this should have no effect
o set the port address "just in case"
o add NPE-A support to the tx done qmgr callback
so that it isn't exposed unless needed. In particular this means
that it's easier to tune the memory layout based on board details.
While here, remove inclusion of <machine/intr.h> from mvreg.h. This
also limits the exposure of MI drivers to SoC specifics, because NIRQ
depends on the SoC.
the implementation can guarantee forward progress in the event of
a stuck interrupt or interrupt storm. This is especially critical
for fast interrupt handlers, as they can cause a hard hang in that
case. When first called, arm_get_next_irq() is passed -1.
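The dispatch loop this implies looks roughly like the following
(dispatch_irq() and frame stand in for the platform's handler
invocation and trapframe):

    int irq;

    irq = -1;                   /* first call: no previous IRQ */
    while ((irq = arm_get_next_irq(irq)) != -1)
        dispatch_irq(frame, irq);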
Obtained from: Juniper Networks, Inc.
When pages were removed from virtual address space by calling
pmap_remove_all(), CPU caches were not invalidated, which led to read
corruption when another page got mapped at the same virtual address at
a later time (the CPU was retrieving stale contents).
Submitted by: Piotr Ziecik
Obtained from: Semihalf
CPU for a longer period than necessary. Additionally, interfaces are
kept polled (in the tick) even if no more packets are available.
In order to avoid such situations, a new generic mechanism can be
implemented proactively, keeping track of the time spent on any
packet and fragmenting the time for any tick, stopping the processing
as soon as possible.
In order to implement such a mechanism, the polling handler needs to
change, returning the number of packets processed.
While the intended logic is not part of this patch, the polling KPI is
broken by this commit, adding an int return value and the new flag
IFCAP_POLLING_NOCOUNT (which will signal that the return value is
meaningless for the installed handler and checking should be skipped).
Bump __FreeBSD_version in order to signal this situation.
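Under the new KPI a polling handler has roughly this shape (the
driver name and body are illustrative):

    static int
    mydrv_poll(struct ifnet *ifp, enum poll_cmd cmd, int count)
    {
        int rx_npkts = 0;

        /* ... process at most 'count' packets, counting them in
         * rx_npkts ... */

        /* Return value is ignored when the driver advertises
         * IFCAP_POLLING_NOCOUNT. */
        return (rx_npkts);
    }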
Reviewed by: emaste
Sponsored by: Sandvine Incorporated
The system hostname is now stored in prison0, and the global variable
"hostname" has been removed, as has the hostname_mtx mutex. Jails may
have their own host information, or they may inherit it from the
parent/system. The proper way to read the hostname is via
getcredhostname(), which will copy either the hostname associated with
the passed cred, or the system hostname if you pass NULL. The system
hostname can still be accessed directly (and without locking) at
prison0.pr_host, but that should be avoided where possible.
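For example, a minimal sketch (td is the usual curthread pointer):

    char hostname[MAXHOSTNAMELEN];

    /* Copies the jail's hostname if the cred is jailed, otherwise
     * the system hostname. */
    getcredhostname(td->td_ucred, hostname, sizeof(hostname));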
The "similar information" referred to is domainname, hostid, and
hostuuid, which have also become prison parameters and had their
associated global variables removed.
Approved by: bz (mentor)
possible future I-cache coherency operation can succeed. On ARM
for example the L1 cache can be (is) virtually mapped, which
means that any I/O that uses temporary mappings will not see the
I-cache made coherent. On ia64 a similar behaviour has been
observed. By flushing the D-cache, execution of binaries backed
by md(4) and/or NFS work reliably.
For Book-E (powerpc), execution over NFS exhibits SIGILL once in
a while as well, though cpu_flush_dcache() hasn't been implemented
yet.
Doing an explicit D-cache flush as part of the non-DMA based I/O
read operation eliminates the need to do it as part of the
I-cache coherency operation itself and as such avoids pessimizing
the DMA-based I/O read operations for which the D-cache is already
flushed/invalidated. It also allows future optimizations whereby
the bcopy() followed by the D-cache flush can be integrated in a
single operation, which could be implemented using on-chip DMA
engines, bypassing the D-cache altogether.
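The read-completion pattern described above, as a sketch (the buffer
names are illustrative):

    /* Fill the destination by copy, then write the D-cache back so
     * a later I-cache sync sees the new bytes. */
    bcopy(src, buf, len);
    cpu_flush_dcache(buf, len);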