4102 space_maps should store more information about themselves
4103 space map object blocksize should be increased
4104 ::spa_space no longer works
4105 removing a mirrored log device results in a leaked object
4106 asynchronously load metaslab
Revision illumos/illumos-gate@0713e232b7
larger than the operational region. If the op region size is zero,
clipping would create a zero-sized map entry. The result is that vm
map splay starts behaving inconsistently, sometimes returning the
zero-sized entry, sometimes the next (or previous) entry.
One step further, it could result in e.g. vm_map_wire() setting
MAP_ENTRY_IN_TRANSITION on the zero-sized entry, but failing to clear
it in the done part. The vm_map_delete() then hangs forever waiting
for the flag removal.
Check for zero-length requests and act as if they are always successful,
without performing any action on the address space.
Diagnosed by: pho
Tested by: pho (previous version)
Reviewed by: alc (previous version)
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
is chunked into pieces limited by the integer io_hold_cnt tunable, while
vm_fault_quick_hold_pages() takes an integer max_count as the upper bound.
Rearrange the checks to correctly handle overflowing address arithmetic.
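To illustrate the overflow concern, a generic sketch (with made-up names,
not the actual vn_io_fault() code): a bounds check written as
"addr + len <= limit" can wrap around for a large addr, while comparing
the length against the remaining space cannot.

    #include <stdbool.h>
    #include <stdint.h>

    /* Is [addr, addr + len) within [0, end_of_range]? */
    bool
    range_ok(uintptr_t addr, size_t len, uintptr_t end_of_range)
    {
            /* Broken form: "addr + len <= end_of_range" may wrap around. */
            if (addr > end_of_range)
                    return (false);
            /* Safe form: compare the length against the remaining space. */
            return (len <= end_of_range - addr);
    }

    int
    main(void)
    {
            /* The naive form would wrongly accept this range after wrapping. */
            return (range_ok((uintptr_t)-16, 256, (uintptr_t)-1) ? 0 : 1);
    }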
Submitted by: bde
Tested by: pho
Discussed with: alc
MFC after: 1 week
On some architectures (powerpc), char is unsigned by default, which means
comparisons against -1 always fail, so the programs get stuck in an
infinite loop.
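A minimal illustration of this bug class (not the actual programs fixed
here): getchar() returns an int so that EOF (-1) stays distinguishable
from every byte value, so its result must not be stored in a plain char.

    #include <stdio.h>

    int
    main(void)
    {
            int c;  /* must be int, not char: EOF (-1) must stay distinct */

            /*
             * With "char c" on an unsigned-char platform (e.g. powerpc),
             * the comparison against EOF always fails and the loop never
             * terminates.
             */
            while ((c = getchar()) != EOF)
                    putchar(c);
            return (0);
    }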
MFC after: 1 week
When entering a mapping via pmap_enter(), unmanaged pages ought to be
naturally excluded from the "modified" and "referenced" emulation.
RW permission should be granted implicitly when requested;
otherwise an unmanaged page will not recover from the permission fault,
since there will be no PV entry to indicate that the page can be written.
In addition, only managed pages that participate in "modified"
emulation need to be marked as "dirty" and "writeable" when entered
with RW permissions. Likewise with the "referenced" flag for managed
pages; unmanaged ones, however, should not be marked as such.
Reviewed by: cognet, gber
When emulating the modified bit, the executable attribute was cleared by
mistake when calling pmap_set_prot(). This was not a problem before the
changes to ref/mod emulation, since all the pages were created RW based
on the "prot" argument in pmap_enter(). Now however not all pages are RW
and the RW permission can be cleared in the process.
Also add proper KTRs accordingly.
Spotted by: cognet
Reviewed by: gber
Now it is easy to expand the size of the mirror when all its components
are replaced. Also add a g_resize method to the geom_mirror class. It
will write updated metadata to the new last sector when the parent
provider is resized.
Silence from: geom@
MFC after: 1 month
host.host_ocr, examine the correct field when setting up the hardware. Also,
the offset for the capabilities register should be 0x140, not 0x240.
Submitted by: Ilya Bakulin <ilya@bakulin.de>
Pointy hat to: me
This is a fix for a regression introduced in r246293.
vm_page_clear_dirty expects the range to have DEV_BSIZE aligned boundaries,
otherwise it extends them. Thus it can happen that the whole page is
marked clean while actually having some small dirty region(s).
This commit makes the range properly aligned and ensures that only
the clean data is marked as such.
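An illustrative sketch of the alignment rule (the standalone form and
rounding macros are mine, not the committed code): only the DEV_BSIZE
blocks that are fully covered by the written range may be marked clean,
so the start is rounded up and the end rounded down; rounding outward
instead would also clear dirty bits for partially written blocks.

    #include <stdio.h>

    #define DEV_BSIZE 512
    #define ROUND_UP(x, a)   (((x) + (a) - 1) / (a) * (a))
    #define ROUND_DOWN(x, a) ((x) / (a) * (a))

    int
    main(void)
    {
            int off = 100, len = 1024;      /* write covers [100, 1124) */
            int start = ROUND_UP(off, DEV_BSIZE);           /* 512 */
            int end = ROUND_DOWN(off + len, DEV_BSIZE);     /* 1024 */

            if (start < end)
                    printf("safe to mark clean: [%d, %d)\n", start, end);
            else
                    printf("no block fully covered; keep dirty bits\n");
            return (0);
    }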
It would be interesting to evaluate how much benefit clearing with DEV_BSIZE
granularity produces. Perhaps instead we should clear the whole page
when it is completely overwritten and not bother clearing any bits
if only a portion of a page is written.
Reported by: George Hartzell <hartzell@alerce.com>,
Richard Todd <rmtodd@servalan.servalan.com>
Tested by: George Hartzell <hartzell@alerce.com>,
Reviewed by: kib
MFC after: 5 days
This call should be a sufficiently close approximation of what happens
when a filesystem is unmounted and remounted. To be more specific, it
should test that the data that was in the page cache is the same data
that ends up on stable storage or in a filesystem's internal cache,
if any.
This will catch the cases where a page with modified data is marked as
a clean page for whatever reason.
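The excerpt does not name the call; assuming it is something along the
lines of msync(2) with MS_INVALIDATE, a minimal userland sketch of
discarding cached pages so that later reads are served from the backing
store could look like this:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
            size_t len = 4096;
            void *p;
            int fd;

            if ((fd = open("testfile", O_RDWR)) == -1)
                    exit(1);
            p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    exit(1);
            /* Drop cached copies; later reads must match what hit disk. */
            if (msync(p, len, MS_INVALIDATE) == -1)
                    exit(1);
            munmap(p, len);
            close(fd);
            return (0);
    }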
While there, make the logging of the special events (open+close previously,
plus the invalidation now) more generic and slightly better than the
previous hack.
MFC after: 10 days
This option should be useful for testing whether a filesystem uses the
unified buffer / page cache, or whether a filesystem's emulation of the
unified cache works as expected.
This should be the case for e.g. ZFS.
MFC after: 1 week
CaptureTracking: Plug a loophole in the "too many uses" heuristic.
The heuristic was added to avoid spending too much compile time. A
specially crafted test case (PR17461, PR16474) with many uses on a
select or bitcast instruction can still trigger the slow case. Add a
check for that case.
This only affects compile time; there is no good way to test it.
This fixes the excessive compile time spent on a specific file of the
graphics/rawtherapee port.
Reported by: mandree
MFC after: 3 days
SSL_set_tlsext_host_name(3) internally does not modify the host buffer
passed to it. So it is safe to DECONST the struct url* here.
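A hedged sketch of the pattern (the struct layout and function are
illustrative, not the exact libfetch code): since
SSL_set_tlsext_host_name() only reads the name, casting away the const
qualifier with __DECONST() cannot lead to the URL being modified.

    #include <sys/cdefs.h>          /* __DECONST() */
    #include <openssl/ssl.h>

    struct url {
            char host[256];         /* placeholder; see fetch(3) */
    };

    long
    enable_sni(SSL *ssl, const struct url *URL)
    {
            /* The callee never writes through the pointer, so this is safe. */
            return (SSL_set_tlsext_host_name(ssl,
                __DECONST(struct url *, URL)->host));
    }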
Reported by: gjb
Approved by: bapt (implicit)
MFC after: 1 week
X-MFC-With: r258347
SNI is Server Name Indication, a TLS extension that indicates the host
that is being connected to at the start of the handshake. It allows the
use of Virtual Hosts over HTTPS.
Submitted by: sbz
Submitted by: Michael Gmelin <freebsd@grem.de> [1]
PR: kern/183583 [1]
Reviewed by: des
Approved by: bapt
MFC after: 1 week
On machines with several CPUs and enough RAM this can easily double ZFS
performance or halve CPU usage. It was disabled three years ago due to
memory and KVA exhaustion reports, but our VM subsystem has improved a
lot since that time, hopefully enough to make another try.
This is a last resort for a very low memory condition, in case other
measures to free memory were ineffective. Sequentially cycle through all
CPUs and extract per-CPU cache buckets into the zone cache, from where
they can be freed.
Lock congestion is the same whether it happens on alloc or free, so
handle it equally. Now that we have back pressure, there is no problem
with growing buckets a bit faster. In any case, growth is much slower
than in 9.x.
These new buckets make bucket size self-tuning softer and more precise.
Without them there are buckets for 1, 5, 13, 29, ... items. While a
difference of about 2x is fine at the bigger sizes, at the smallest ones
it is 5x and 2.6x respectively. The new buckets make that progression
look like 1, 3, 5, 9, 13, 29, reducing the jumps between steps, making
the algorithm work more smoothly, and allocating and freeing memory in
better fitting chunks. Otherwise there is quite a big gap between
allocating 128K and 5x128K of RAM at once.
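The step ratios quoted above can be checked directly; a trivial
standalone computation over the two size lists from the text:

    #include <stdio.h>

    int
    main(void)
    {
            int old_sizes[] = { 1, 5, 13, 29 };
            int new_sizes[] = { 1, 3, 5, 9, 13, 29 };
            int i;

            /* Old steps: 5.0x, 2.6x, 2.2x. */
            for (i = 1; i < 4; i++)
                    printf("old %2d -> %2d: %.1fx\n", old_sizes[i - 1],
                        old_sizes[i], (double)old_sizes[i] / old_sizes[i - 1]);
            /* New steps: 3.0x, 1.7x, 1.8x, 1.4x, 2.2x. */
            for (i = 1; i < 6; i++)
                    printf("new %2d -> %2d: %.1fx\n", new_sizes[i - 1],
                        new_sizes[i], (double)new_sizes[i] / new_sizes[i - 1]);
            return (0);
    }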
Every time the system detects a low memory condition, decrease the
bucket size for each zone by one item. As a result, higher memory
pressure will push toward smaller bucket sizes, and thus smaller per-CPU
caches and more efficient memory use.
Before this change there was no force opposing bucket growth resulting
from practically inevitable zone lock conflicts, and after some run time
the per-CPU caches could consume enough RAM to kill the system.
adapters. Both devices support Gigabit Ethernet and USB 2.0, and the AX88179
supports USB 3.0. The driver was written by kevlo@ and lwhsu@, with a few
bug fixes from me.
MFC after: 2 months