Commit Graph

23 Commits

Author SHA1 Message Date
Alan Cox
c19aa3402b Increase UMA_BOOT_PAGES because of changes to pv entry initialization in
revision 1.457 of i386/i386/pmap.c.
2004-01-18 05:51:06 +00:00
Alan Cox
925692caa5 - Significantly reduce the number of preallocated pv entries in
pmap_init().  Such a large preallocation is unnecessary and wastes
   nearly eight megabytes of kernel virtual address space per gigabyte
   of managed physical memory.
 - Increase UMA_BOOT_PAGES by two.  This enables the removal of
   pmap_pv_allocf().  (Note: this function was only used during
   initialization, specifically, after pmap_init() but before
   pmap_init2().  During pmap_init2(), a new allocator is installed.)
2003-12-22 01:01:32 +00:00
Jeff Roberson
9643769a3a - Remove the working-set algorithm. Instead, use the per cpu buckets as the
working set cache.  This has several advantages.  Firstly, we never touch
   the per cpu queues now in the timeout handler.  This removes one more
   reason for having per cpu locks.  Secondly, it reduces the size of the zone
   by 8 bytes, bringing it under 200 bytes for a single proc x86 box.  This
   tidies up other logic as well.
 - The 'destroy' flag no longer needs to be passed to zone_drain() since it
   always frees everything in the zone's slabs.
 - cache_drain() is now only called from zone_dtor() and so it destroys by
   default.  It also does not need the destroy parameter now.
2003-09-19 23:27:46 +00:00
Jeff Roberson
3e0cab95c0 - Remove the cache colorization code. We can't use it due to all of the
broken consumers of the malloc interface who assume that the allocated
   address will be an even multiple of the size.
 - Remove disabled time delay code on uma_reclaim().  The comment there said
   it all.  It was not an effective strategy and it should not be left in
   #if 0'd for all eternity.
2003-09-19 23:04:44 +00:00
Jeff Roberson
b60f5b794e - Fix the silly flag situation in UMA. Remove redundant ZFLAG/ZONE flags
by accepting the user supplied flags directly.  Previously this was not
   done so that flags for the same field would not be defined in two
   different files.  Add comments in each header instructing future
   developers on how not to shoot their feet.
 - Fix a test for !OFFPAGE which should have been a test for HASH.  This would
   have caused a panic if we had ever destructed a malloc zone.  This also
   opens up the possibility that other zones could use the vsetobj() method
   rather than a hash.
2003-09-19 08:37:44 +00:00
Jeff Roberson
cae33c1429 - Initialize a pool of bucket zones so that we waste less space on zones that
don't cache as many items.
 - Introduce the bucket_alloc(), bucket_free() functions to wrap bucket
   allocation.  These functions select the appropriate bucket zone to
   allocate from or free to.
 - Rename ub_ptr to ub_cnt to reflect a change in its use.  ub_cnt now reflects
   the count of free items in the bucket.  This gets rid of many unnatural
   subtractions by 1 throughout the code.
 - Add ub_entries which reflects the number of entries possibly held in a
   bucket.
2003-09-19 06:26:45 +00:00
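A minimal sketch of the bucket layout and wrappers described in the entry above; the field widths, the list linkage, and the pool of bucket sizes are illustrative, not the committed ones:

    #include <sys/queue.h>

    struct uma_bucket {
            LIST_ENTRY(uma_bucket) ub_link; /* chained on the zone's bucket lists */
            int16_t  ub_cnt;                /* free items currently in the bucket */
            int16_t  ub_entries;            /* capacity of ub_bucket[] */
            void    *ub_bucket[];           /* cached item pointers */
    };

    /* bucket_alloc()/bucket_free() pick the smallest member of a small pool of
     * bucket zones whose ub_entries covers the request, so zones that cache
     * few items no longer pay for a maximum-size bucket. */
    struct uma_bucket *bucket_alloc(int entries, int wflags);
    void               bucket_free(struct uma_bucket *bucket);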
Bosko Milekic
20e8e865bd - When deciding whether to init the zone with small_init or large_init,
compare the zone element size (+1 for the byte of linkage) against
  UMA_SLAB_SIZE - sizeof(struct uma_slab), and not just UMA_SLAB_SIZE.
  Add a KASSERT in zone_small_init to make sure that the computed
  ipers (items per slab) for the zone is not zero, despite the addition
  of the check, just to be sure (this part submitted by: silby)

- UMA_ZONE_VM used to imply BUCKETCACHE.  Now it implies
  CACHEONLY instead.  CACHEONLY is like BUCKETCACHE in the
  case of bucket allocations, but in addition to that also ensures that
  we don't setup the zone with OFFPAGE slab headers allocated from the
  slabzone.  This means that we're not allowed to have a UMA_ZONE_VM
  zone initialized for large items (zone_large_init) because it would
  require the slab headers to be allocated from slabzone, and hence
  kmem_map.  Some of the zones init'd with UMA_ZONE_VM are so init'd
  before kmem_map is suballoc'd from kernel_map, which is why this
  change is necessary.
2003-08-11 19:39:45 +00:00
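The size test and the guard from the entry above, roughly as they would read in uma_core.c; the field names (uz_size, uz_ipers, uz_name) follow the uma_int.h of that era but are not quoted verbatim:

    /* The item plus its one byte of linkage must leave room for the in-line
     * struct uma_slab header; otherwise the header has to live off-page. */
    if (zone->uz_size + 1 > UMA_SLAB_SIZE - sizeof(struct uma_slab))
            zone_large_init(zone);
    else
            zone_small_init(zone);

    /* Inside zone_small_init(): make sure at least one item fits per slab. */
    KASSERT(zone->uz_ipers != 0,
        ("zone_small_init: ipers is 0 for zone %s", zone->uz_name));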
Jeff Roberson
f828e5bedb - Get rid of the ill-conceived uz_cachefree member of uma_zone.
- In sysctl_vm_zone use the per cpu locks to read the current cache
   statistics; this makes them more accurate while under heavy load.

Submitted by:	tegge
2003-07-30 05:59:17 +00:00
Bosko Milekic
d88797c2ba Move the pcpu lock out of the uma_cache and instead have a single set
of pcpu locks.  This makes uma_zone somewhat smaller (by (LOCKNAME_LEN *
sizeof(char) + sizeof(struct mtx) * maxcpu) bytes, to be exact).

No Objections from jeff.
2003-06-25 20:49:48 +00:00
Poul-Henning Kamp
c5d771b807 Prepend _ to internal union members to avoid ambiguity.
Found by:       FlexeLint
2003-05-31 19:52:15 +00:00
Jeff Roberson
48eea37508 - Add support for machine dependent page allocation routines. MD code
may define UMA_MD_SMALL_ALLOC to make use of this feature.

Reviewed by:	peter, jake
2002-11-01 01:01:27 +00:00
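A sketch of how an MD backend plugs in; the hook names (uma_small_alloc, uma_small_free) and the uz_allocf/uz_freef members are taken from later sources and may differ slightly in this revision:

    #ifdef UMA_MD_SMALL_ALLOC
            /* Platforms with a direct map can hand back page-sized chunks
             * without going through kmem; the zone's backend pointers are
             * simply redirected to the MD routines. */
            zone->uz_allocf = uma_small_alloc;
            zone->uz_freef = uma_small_free;
    #endif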
Jeff Roberson
f461cf2297 - Use my freebsd email alias in the copyright.
- Remove redundant instances of my email alias in the file summary.
2002-09-19 06:05:32 +00:00
Jeff Roberson
99571dc345 - Split UMA_ZFLAG_OFFPAGE into UMA_ZFLAG_OFFPAGE and UMA_ZFLAG_HASH.
- Remove all instances of the mallochash.
 - Stash the slab pointer in the vm page's object pointer when allocating from
   the kmem_obj.
 - Use the overloaded object pointer to find slabs for malloced memory.
2002-09-18 08:26:30 +00:00
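A sketch of the overloaded-object-pointer lookup described above (compare vtoslab()/vsetslab() in uma_int.h); the exact form in this revision may differ:

    static __inline uma_slab_t
    vtoslab(vm_offset_t va)
    {
            vm_page_t p;

            p = PHYS_TO_VM_PAGE(pmap_kextract(va));
            /* Pages backing malloced memory have no real VM object, so the
             * object field doubles as a back-pointer to the owning slab. */
            return ((uma_slab_t)p->object);
    }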
Julian Elischer
e602ba25fd Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one cpu) by making ALL system calls optionally asynchronous.
To come: ia64 and power-pc patches, patches for gdb, test program (in tools)

Reviewed by:	Almost everyone who counts
	(at various times, peter, jhb, matt, alfred, mini, bernd,
	and a cast of thousands)

	NOTE: this is still Beta code, and contains lots of debugging stuff.
	Expect slight instability in signals.
2002-06-29 17:26:22 +00:00
Jeff Roberson
18aa2de5a7 - Introduce the new M_NOVM option which tells uma to only check the currently
allocated slabs and bucket caches for free items.  It will not go ask the vm
  for pages.  This differs from M_NOWAIT in that it not only doesn't block, it
  doesn't even ask.

- Add a new zcreate option ZONE_VM, which sets the BUCKETCACHE zflag.  This
  tells uma that it should only allocate buckets out of the bucket cache, and
  not from the VM.  It does this by using the M_NOVM option to zalloc when
  getting a new bucket.  This is so that the VM doesn't recursively enter
  itself while trying to allocate buckets for vm_map_entry zones.  If there
  are already allocated buckets when we get here we'll still use them but
  otherwise we'll skip it.

- Use the ZONE_VM flag on vm map entries and pv entries on x86.
2002-06-17 22:02:41 +00:00
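A sketch of the fallback order the entry above describes; all three helper names are made-up stand-ins for the real allocation path:

    item = cache_fetch(zone);               /* per-CPU bucket cache */
    if (item == NULL)
            item = slab_fetch(zone);        /* already-allocated slabs */
    if (item == NULL && (flags & M_NOVM))
            return (NULL);                  /* M_NOVM: never ask the VM for pages */
    if (item == NULL)
            item = vm_fetch(zone, flags);   /* last resort: get pages from the VM */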
Jeff Roberson
28bc44195c Add a new zone flag UMA_ZONE_MTXCLASS. This puts the zone in its own
mutex class.  Currently this is only used for kmapentzone because kmapents
are potentially allocated when freeing memory.  This is not dangerous
though because no other allocations will be done while holding the
kmapentzone lock.
2002-04-29 23:45:41 +00:00
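A sketch of the consumer side, roughly how vm_map.c would ask for a private mutex class for kmapentzone; the argument list follows uma(9) and is not quoted from the commit:

    kmapentzone = uma_zcreate("KMAP ENTRY", sizeof(struct vm_map_entry),
        NULL, NULL, NULL, NULL, UMA_ALIGN_PTR,
        UMA_ZONE_MTXCLASS | UMA_ZONE_VM);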
Jeff Roberson
af7f9b97b6 Fix the calculation that determines uz_maxpages. It was off for large zones.
Fortunately we have no large zones with maximums specified yet, so it wasn't
breaking anything.

Implement blocking when a zone exceeds the maximum and M_WAITOK is specified.
Previously this just failed like the old zone allocator did.  The old zone
allocator didn't support WAITOK/NOWAIT though so we should do what we
advertise.

While I was in there I cleaned up some more zalloc logic to further simplify
that code path and reduce redundant code.  This was needed to make the blocking
work properly anyway.
2002-04-14 01:56:25 +00:00
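A sketch of the new M_WAITOK behaviour at the page limit; uz_pages, uz_maxpages, uz_lock and UMA_ZFLAG_FULL follow the uma_int.h of that era, but the loop itself is illustrative:

    while (zone->uz_pages >= zone->uz_maxpages) {
            if ((flags & M_WAITOK) == 0)
                    return (NULL);          /* M_NOWAIT keeps the old behaviour: fail */
            zone->uz_flags |= UMA_ZFLAG_FULL;
            /* Sleep on the zone; the free path wakes us when pages come back. */
            msleep(zone, &zone->uz_lock, PVM, "zonelimit", 0);
    }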
Jeff Roberson
1d4cb54ba8 Quiet witness warnings about acquiring several zone locks. In the case that
this happens it is OK.
2002-04-08 21:08:17 +00:00
Jeff Roberson
a553d4b8eb Rework most of the bucket allocation and free code so that per cpu locks are
never held across blocking operations.  Also, fix two other lock order
reversals that were exposed by jhb's witness change.

The free path previously had a bug that would cause it to skip the free bucket
list in some cases and go straight to allocating a new bucket.  This has been
fixed as well.

These changes made the bucket handling code much cleaner and removed quite a
few lock operations.  This should be marginally faster now.

It is now possible to call malloc w/o Giant and avoid any witness warnings.
This still isn't entirely safe though because malloc_type statistics are not
protected by any lock.
2002-04-08 02:42:55 +00:00
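The rule the entry above enforces, sketched; the CPU_LOCK/CPU_UNLOCK macros and bucket_get() are approximations of that era's names, not verbatim code:

    /* Never hold a per-CPU cache lock across anything that can block:
     * drop it, do the potentially sleeping work, then retake it and
     * restart the fast path, since the cache may have changed meanwhile. */
    CPU_UNLOCK(zone, cpu);
    bucket = bucket_get(zone, flags);       /* may sleep with M_WAITOK */
    CPU_LOCK(zone, cpu);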
Jeff Roberson
c235bfa551 Spelling correction; s/seperate/separate/g
Submitted by:	eric
2002-04-07 22:56:48 +00:00
John Baldwin
6008862bc2 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
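With the new mtx_init(9) signature taking both a lock name and a lock type, a UMA zone lock would be set up roughly as below; the exact type string is illustrative:

    /* The name is per-zone, the type is shared so that witness groups
     * every zone lock into one class. */
    mtx_init(&zone->uz_lock, zone->uz_name, "UMA zone", MTX_DEF);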
Jeff Roberson
f22a4b62f5 Add a new mtx_init option "MTX_DUPOK" which allows duplicate acquires of locks
with this flag.  Remove the dup_list and dup_ok code from subr_witness.  Now
we just check for the flag instead of doing string compares.

Also, switch the process lock, process group lock, and uma per cpu locks over
to this interface.  The original mechanism did not work well for uma because
per cpu lock names are unique to each zone.

Approved by:	jhb
2002-03-27 09:23:41 +00:00
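A sketch of a per-CPU cache lock set up under the new flag; the cache and name fields are made up for illustration:

    /* MTX_DUPOK tells witness that holding two locks of this type at once
     * is expected, replacing the old dup_list string comparisons. */
    mtx_init(&cache->uc_lock, zone->uz_lname, "UMA cpu", MTX_DEF | MTX_DUPOK);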
Jeff Roberson
8355f576a9 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab-like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
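A sketch of the consumer-facing API the new allocator exports (see uma(9)); "foo" is a made-up client, and the uma_zcreate prototype shown is the general shape, which has varied slightly over the years:

    static uma_zone_t foo_zone;

    foo_zone = uma_zcreate("foo", sizeof(struct foo),
        NULL, NULL, NULL, NULL,             /* ctor, dtor, init, fini */
        UMA_ALIGN_PTR, 0);

    struct foo *f = uma_zalloc(foo_zone, M_WAITOK);
    /* ... use f ... */
    uma_zfree(foo_zone, f);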