mirror of https://git.FreeBSD.org/src.git synced 2024-12-25 11:37:56 +00:00
Commit Graph

10606 Commits

Author SHA1 Message Date
Peter Wemm
041a991fa7 MFamd64: shrink pv entries from 24 bytes to about 12 bytes. (336 pv entries
per page = effectively 12.19 bytes per pv entry after overheads).
Instead of using a shared UMA zone for 24 byte pv entries (two 8-byte tailq
nodes, a 4 byte pointer, and a 4 byte address), we allocate a page at a
time per process.  This provides 336 pv entries per process (actually, per
pmap address space) and eliminates one of the 8-byte tailq entries since
we now can track per-process pv entries implicitly.  The pointer to
the pmap can be eliminated by using address arithmetic to locate the
metadata in the page header, which holds a single pointer shared by all
336 entries.
There is an 11-int bitmap for the freelist of those 336 entries.
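
A minimal sketch of the per-page chunk layout described above, with
illustrative names (the committed structures may differ in detail):

#include <sys/queue.h>
#include <stdint.h>

#define _NPCM   11      /* 11 x 32-bit words cover the 336 free bits */
#define _NPCPV  336     /* pv entries carved out of one 4 KB page */

struct pv_entry {
        uint32_t                pv_va;          /* 4-byte virtual address */
        TAILQ_ENTRY(pv_entry)   pv_list;        /* 8-byte per-vm_page link */
};

struct pv_chunk {                               /* lives at the start of its page */
        struct pmap             *pc_pmap;       /* one pointer for all 336 entries */
        TAILQ_ENTRY(pv_chunk)   pc_list;        /* chunks owned by this pmap */
        uint32_t                pc_map[_NPCM];  /* freelist bitmap, 1 bit per entry */
        struct pv_entry         pc_pventry[_NPCPV];
};

/* Address arithmetic instead of a stored pointer: mask off the page offset
 * to reach the chunk header shared by every entry on the page. */
#define PV_PMAP(pv)     (((struct pv_chunk *)((uintptr_t)(pv) & ~(uintptr_t)4095))->pc_pmap)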

This is mostly a mechanical conversion from amd64, except:
* i386 has to allocate kvm and map the pages, amd64 has them outside of kvm
* native word size is smaller, so bitmaps etc become 32 bit instead of 64
* no dump_add_page() etc stuff because they are in kvm always.
* various pmap internals tweaks because pmap uses direct map on amd64 but
  on i386 it has to use sched_pin and temporary mappings.

Also, sysctl vm.pmap.pv_entry_max and vm.pmap.shpgperproc are now
dynamic sysctls.  Like on amd64, i386 can now tune the pv entry limits
without a recompile or reboot.

This is important because of the following scenario.   If you have a 1GB
file (262144 pages) mmap()ed into 50 processes, that requires 13 million
pv entries.  At 24 bytes per pv entry, that is 314MB of ram and kvm, while
at 12 bytes it is 157MB.  A 157MB saving is significant.

Test-run by:  scottl (Thanks!)
2006-04-26 21:49:20 +00:00
Jung-uk Kim
daea0aad84 Check if reported HTT cores are physical cores. This commit does not
affect AMD CPUs at all because the HTT bit is disabled earlier.  Intel
multicore CPUs and the ULE scheduler may be affected.
2006-04-25 00:06:37 +00:00
Jung-uk Kim
091c9b4961 Add another Intel CPU feature flag, xTPR (Send Task Priority Messages). 2006-04-24 22:56:57 +00:00
Jung-uk Kim
cf24d86bcc Check if deterministic cache parameters leaf is valid before use. 2006-04-24 22:23:52 +00:00
Colin Percival
8b4553119e Adjust dangerous-shared-cache-detection logic from "all shared data
caches are dangerous" to "a shared L1 data cache is dangerous".  This
is a compromise between paranoia and performance: Unlike the L1 cache,
nobody has publicly demonstrated a cryptographic side channel which
exploits the L2 cache -- this is harder due to the larger size, lower
bandwidth, and greater associativity -- and prohibiting shared L2
caches turns Intel Core Duo processors into Intel Core Solo processors.

As before, the 'machdep.hyperthreading_allowed' sysctl will allow even
the L1 data cache to be shared.
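
Roughly, the policy now reads like this (a sketch with a hypothetical
topology helper; the real test is driven by CPUID cache information):

#include <stdbool.h>

/* Hypothetical stand-in: does this logical CPU share an L1 data cache
 * with a sibling?  The kernel derives this from CPUID cache parameters. */
static bool shares_l1_dcache(int cpu) { (void)cpu; return true; }

static int hyperthreading_allowed;      /* machdep.hyperthreading_allowed */

static bool
logical_cpu_usable(int cpu)
{
        if (hyperthreading_allowed)
                return true;            /* admin opted in: even a shared L1 is OK */
        if (shares_l1_dcache(cpu))
                return false;           /* demonstrated side channel: keep it idle */
        return true;                    /* only the L2 is shared: allow it (Core Duo) */
}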

Discussed with:	jhb, scottl
Security:	See FreeBSD-SA-05:09.htt for background material.
2006-04-24 21:17:01 +00:00
Xin LI
3b28c0c6f9 Move AHC_REG_PRETTY_PRINT and AHD_REG_PRETTY_PRINT below
their corresponding devices.
2006-04-24 08:44:34 +00:00
Peter Wemm
4503a06eef Merge minidumps from amd64 where they were originally developed.
Major differences:
 * since there is no direct map region, there is no custom uma memory
   allocator to modify to include its pages in the dumps.
 * Various data entries are reduced from 64 bit to 32 bit to match the
   native size.

dump_add_page() and dump_drop_page() are still present in case one wants to
arrange for arbitrary pages to be dumped.  This is of marginal use though
because libkvm+kgdb cannot address physical memory that isn't mapped into
kvm.
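
A sketch of what that page-tracking interface amounts to, using 32-bit
bitmap words as on i386 (simplified and non-atomic; names follow the
message):

#include <stdint.h>

#define PAGE_SHIFT      12
#define DUMP_MAX_PAGES  (1u << 20)      /* illustrative: 4 GB of 4 KB pages */

static uint32_t vm_page_dump[DUMP_MAX_PAGES / 32];      /* 1 bit per physical page */

static void
dump_add_page(uint32_t pa)
{
        uint32_t idx = pa >> PAGE_SHIFT;        /* physical page number */

        vm_page_dump[idx >> 5] |= 1u << (idx & 31);
}

static void
dump_drop_page(uint32_t pa)
{
        uint32_t idx = pa >> PAGE_SHIFT;

        vm_page_dump[idx >> 5] &= ~(1u << (idx & 31));
}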
2006-04-21 04:28:43 +00:00
Warner Losh
99b0e15695 Set the rid of the resource we're about to return to the user. 2006-04-20 04:10:27 +00:00
Colin Percival
2652af563e Correct a local information leakage bug affecting AMD FPUs.
Security:	FreeBSD-SA-06:14.fpu
2006-04-19 07:00:19 +00:00
Mitsuru IWASAKI
858a52f464 Import ACPI Dock Station support. Note that this is still very young.
Additional detach implementations (or perhaps improvements) for other
device drivers are required.

Reviewed by:	njl, imp
MFC after:	1 week
2006-04-15 12:31:34 +00:00
Alan Cox
826c207263 Retire pmap_track_modified(). We no longer need it because we do not
create managed mappings within the clean submap.  To prevent regressions,
add assertions blocking the creation of managed mappings within the clean
submap.

Reviewed by: tegge
2006-04-12 04:22:52 +00:00
Paul Saab
d8636a9ab7 Hook bce up to the build 2006-04-10 20:04:22 +00:00
John Baldwin
0f2be07217 - Don't set CR0_NE and CR0_MP in npx_probe() as they are already set
earlier in cpu_setregs().
- If we know this CPU has a FPU via cpuid, then just assume the INT16
  interface and make the npx device quiet to not clutter the dmesg.  This
  is true for all Pentium and later CPUs and even some of the later 486dx
  CPUs.

Reviewed by:	bde
Tested by:	ps
MFC after:	1 week
2006-04-06 17:17:45 +00:00
John Baldwin
907d4d7f45 Cache the value of the lower half of each I/O APIC redirection table entry
so that we only have to do an ioapic_write() instead of an ioapic_read()
followed by an ioapic_write() every time we mask and unmask level triggered
interrupts.  This cuts the execution time for these operations roughly in
half.
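
A sketch of the caching idea (the cached-field and helper names here are
illustrative):

#include <stdint.h>

#define IOART_INTMASK   0x00010000u     /* mask bit in the low half of an RTE */

struct ioapic_pin {
        uint32_t io_lowreg;     /* cached low 32 bits of the redirection entry */
};

/* Hypothetical stand-in for the register write through the I/O APIC window. */
static void ioapic_write_rte_low(int pin, uint32_t val) { (void)pin; (void)val; }

static void
ioapic_mask_pin(struct ioapic_pin *ip, int pin, int masked)
{
        /* One write instead of a read-modify-write, thanks to the cache. */
        if (masked)
                ip->io_lowreg |= IOART_INTMASK;
        else
                ip->io_lowreg &= ~IOART_INTMASK;
        ioapic_write_rte_low(pin, ip->io_lowreg);
}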

Profiled by:	Paolo Pisati <p.pisati@oltrelinux.com>
MFC after:	1 week
2006-04-05 20:43:19 +00:00
Joseph Koshy
64e3ca8f48 Freshen a comment.
Reviewed by:	jhb
2006-04-04 02:26:45 +00:00
Marcel Moolenaar
bfcdefd8aa Eliminate HAVE_STOPPEDPCBS. On ia64 the PCPU holds a pointer to the
PCB in which the context of stopped CPUs is stored. To access this
PCB from KDB, we introduce a new define, called KDB_STOPPEDPCB. The
definition, when present, lives in <machine/kdb.h> and abstracts
where MD code saves the context. Define KDB_STOPPEDPCB on i386,
amd64, alpha and sparc64 in accordance with the previous code.
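
On i386/amd64 the new define boils down to an array lookup by CPU id; a
sketch of the sort of thing <machine/kdb.h> now carries (spelling is
illustrative):

struct pcb;
extern struct pcb stoppcbs[];   /* MD code saves the stopped-CPU context here */

#define KDB_STOPPEDPCB(pc)      (&stoppcbs[(pc)->pc_cpuid])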
2006-04-03 22:51:47 +00:00
Peter Wemm
b9eee07e36 Remove the unused sva and eva arguments from pmap_remove_pages(). 2006-04-03 21:16:10 +00:00
Alan Cox
9c6a71e4ca Introduce pmap_try_insert_pv_entry(), a function that conditionally creates
a pv entry if the number of entries is below the high water mark for pv
entries.

Use pmap_try_insert_pv_entry() in pmap_copy() instead of
pmap_insert_entry().  This avoids possible recursion on a pmap lock in
get_pv_entry().

Eliminate the explicit low-memory checks in pmap_copy().  The check that
the number of pv entries was below the high water mark was largely
ineffective because it was located in the outer loop rather than the
inner loop where pv entries were allocated.  Instead of checking, we
attempt the allocation and handle the failure.
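
A sketch of the conditional allocation, with stand-in globals and a
stand-in allocator (the real function also links the entry onto the
vm_page's pv list under the pmap lock):

#include <stdlib.h>

static int pv_entry_count;                      /* current number of pv entries */
static int pv_entry_high_water = 1000;          /* illustrative limit */

struct pv_entry { unsigned pv_va; };

/* Stand-in for get_pv_entry(); the real one may recurse into reclamation. */
static struct pv_entry *
get_pv_entry(void)
{
        pv_entry_count++;
        return malloc(sizeof(struct pv_entry));
}

static struct pv_entry *
pmap_try_insert_pv_entry(unsigned va)
{
        struct pv_entry *pv;

        if (pv_entry_count >= pv_entry_high_water)
                return NULL;            /* optional caller just skips the mapping */
        pv = get_pv_entry();
        if (pv != NULL)
                pv->pv_va = va;
        return pv;
}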

Reviewed by: tegge
Reported by: kris
MFC after: 5 days
2006-04-02 05:45:05 +00:00
Maksim Yevmenkin
9216fccdd9 Add kbdmux(4) to GENERIC
Requested by:	scottl
2006-03-31 19:03:37 +00:00
Scott Long
7f631a410c Hook the MFI driver up to the build. 2006-03-29 09:57:22 +00:00
Dag-Erling Smørgrav
6f0f8cca25 Use wrapper macros for atomic pointer operations in order to perform the
correct casts.  This should probably be merged to other architectures.
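
The wrappers are of this general shape on i386, where a pointer and an
int are the same width (a sketch; the exact macro set is illustrative):

#include <sys/types.h>
#include <machine/atomic.h>

#define atomic_store_rel_ptr(p, v)                                      \
        atomic_store_rel_int((volatile u_int *)(p), (u_int)(v))
#define atomic_cmpset_ptr(dst, old, new)                                \
        atomic_cmpset_int((volatile u_int *)(dst), (u_int)(old), (u_int)(new))
#define atomic_readandclear_ptr(p)                                      \
        atomic_readandclear_int((volatile u_int *)(p))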
2006-03-28 14:34:48 +00:00
John Baldwin
8283c726e7 If the XSDT address in the RSDP for an ACPI 2.0 machine is NULL, then fall
back to using the RSDT instead.  ACPI-CA already follows this same strategy
as a workaround for yet another instance of brain-damaged BIOS writers.
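
Conceptually the fallback looks like this (a sketch with a trimmed-down
RSDP view; the real field names come from the ACPI-CA headers):

#include <stdint.h>

struct rsdp_view {
        uint8_t         revision;               /* >= 2 means ACPI 2.0+ */
        uint32_t        rsdt_physical_address;
        uint64_t        xsdt_physical_address;
};

static uint64_t
acpi_root_table(const struct rsdp_view *rsdp)
{
        /* Some ACPI 2.0 BIOSes advertise revision >= 2 but leave the XSDT
         * pointer NULL; fall back to the 32-bit RSDT in that case. */
        if (rsdp->revision >= 2 && rsdp->xsdt_physical_address != 0)
                return rsdp->xsdt_physical_address;
        return rsdp->rsdt_physical_address;
}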

PR:		i386/93963
Submitted by:	Masayuki FUKUI <fukui.FreeBSD@fanet.net>
2006-03-27 15:59:48 +00:00
Alan Cox
fa8053e9a9 Eliminate unnecessary invalidations of the entire TLB by pmap_remove().
Specifically, on mappings with PG_G set pmap_remove() not only performs
the necessary per-page invlpg invalidations but also performs an
unnecessary invalidation of the entire set of non-PG_G entries.

Reviewed by: tegge
2006-03-21 18:07:42 +00:00
David Xu
39d3e6198d Remove stale KSE code.
Reviewed by: alc
2006-03-21 06:46:27 +00:00
John Baldwin
aef8cd01ed Drop some unneeded casts since we program the kernel in C rather than C++. 2006-03-20 19:39:08 +00:00
Alexander Leidinger
c85625bfe7 regen 2006-03-18 20:49:01 +00:00
Alexander Leidinger
d4a3f5ddb6 Fixup some problems in my previous commit (COMPAT_43).
Pointyhat to:	netchild
2006-03-18 20:47:36 +00:00
Alexander Leidinger
1f7642e058 regen after COMPAT_43 removal 2006-03-18 18:24:38 +00:00
Alexander Leidinger
5c8919adf4 Get rid of the need of COMPAT_43 in the linuxolator.
Submitted by:	Divacky Roman <xdivac02@stud.fit.vutbr.cz>
Obtained from:	DragonFly (some parts)
2006-03-18 18:20:17 +00:00
John Baldwin
39092e79ed Don't allow userland to set hardware watch points on kernel memory at all.
Previously, we tried to allow this only for root.  However, we were calling
suser() on the *target* process rather than the current process.  This
means that if you can ptrace() a process running as root you can set a
hardware watch point in the kernel.  In practice I think you probably have
to be root in order to pass the p_candebug() checks in ptrace() to attach
to a process running as root anyway.  Rather than fix the suser(), I just
axed the entire idea, as I can't think of any good reason _at all_ for
userland to set hardware watch points for KVM.
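
The resulting rule is a one-line address check; a sketch (the constant
value is illustrative of i386):

#include <errno.h>
#include <stdint.h>

#define VM_MAXUSER_ADDRESS      0xbfc00000u     /* start of kernel VA (illustrative) */

static int
validate_watchpoint_addr(uint32_t addr)
{
        if (addr >= VM_MAXUSER_ADDRESS)
                return EINVAL;  /* no userland watchpoints on kernel memory */
        return 0;
}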

MFC after:	3 days
Also thinks hardware watch points on KVM from userland are bad:	bde, rwatson
2006-03-14 16:13:55 +00:00
David Xu
90a693f891 It is not necessary to read %gs twice. 2006-03-10 05:55:26 +00:00
David Xu
fc643048fe Fix stack offset to allow gcc's stack alignment code to work correctly.
MFC after: 3 days
2006-03-10 02:54:45 +00:00
John Baldwin
8e8f0765ab Flip the switch and don't route interrupts to hyperthreads in a HT system.
In at least one benchmark this showed around a 20% performance increase.
If other workloads do benefit from having hyperthreads service interrupts,
we can always make this a loader tunable.

MFC after:	3 days
Tested by:	ps
2006-03-09 16:38:52 +00:00
Poul-Henning Kamp
6acae67129 Improve the advantech watchdog. 2006-03-06 07:43:28 +00:00
Yaroslav Tykhiy
375ce6798f Take the functionality contained in the former "options TDFX_LINUX"
into a separate module.  Accordingly, convert the option into a device
named similarly.

Note for MFC: Perhaps the option should stay in RELENG_6 for POLA reasons.

Suggested by:	scottl
Reviewed by:	cokane
MFC after:	5 days
2006-03-03 21:37:38 +00:00
Alexander Leidinger
fb0a379774 - use a more common style to print memory sizes
- add some more cache sizes (2nd and 3rd level) [1]

Submitted by:	HATANOU Tomomi <hatanou@infolab.ne.jp> [1]
PR:		91328 [1]
2006-03-03 18:54:05 +00:00
Rink Springer
5fa7c51ff6 Committed the xbox syscons(4)-able console driver.
Reviewed by:    arch@ (no comments)
Approved by:    imp (mentor)
2006-03-03 14:52:57 +00:00
Scott Long
a7f12baaca iir works on PAE now. 2006-03-03 04:30:18 +00:00
John Baldwin
215e7c161a Rework how we wire up interrupt sources to CPUs:
- Throw out all of the logical APIC ID stuff.  The Intel docs are somewhat
  ambiguous, but it seems that the "flat" cluster model we are currently
  using is only supported on Pentium and P6 family CPUs.  The other
  "hierarchy" cluster model that is supported on all Intel CPUs with
  local APICs is severely underdocumented.  For example, it's not clear
  if the OS needs to glean the topology of the APIC hierarchy from
  somewhere (neither ACPI nor MP Table include it) and setup the logical
  clusters based on the physical hierarchy or not.  Not only that, but on
  certain Intel chipsets, even though there were 4 CPUs in a logical
  cluster, all the interrupts were only sent to one CPU anyway.
- We now bind interrupts to individual CPUs using physical addressing via
  the local APIC IDs.  This code has also moved out of the ioapic PIC
  driver and into the common interrupt source code so that it can be
  shared with MSI interrupt sources since MSI is addressed to APICs the
  same way that I/O APIC pins are.
- Interrupt source classes grow a new method pic_assign_cpu() to bind an
  interrupt source to a specific local APIC ID.
- The SMP code now tells the interrupt code which CPUs are available to
  handle interrupts in a simpler and more intuitive manner.  For one thing,
  it means we could now choose to not route interrupts to HT cores if we
  wanted to (this code is currently in place in fact, but under an #if 0
  for now).
- For now we simply do static round-robin of IRQs to CPUs when the first
  interrupt handler is added, just as before, with the change that IRQs are
  now bound to individual CPUs rather than groups of up to 4 CPUs (see the
  sketch after this list).
- Because the IRQ to CPU mapping has now been moved up a layer, it would
  be easier to manage this mapping from higher levels.  For example, we
  could allow drivers to specify a CPU affinity map for their interrupts,
  or we could allow a userland tool to bind IRQs to specific CPUs.
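
A sketch of the new binding path and the static round-robin (structure
and function names are illustrative; the real code is split between the
interrupt source layer and the PIC drivers):

#include <stdint.h>

struct intsrc;

struct pic {
        /* New method: bind a source to the local APIC with this physical ID. */
        void    (*pic_assign_cpu)(struct intsrc *isrc, uint32_t apic_id);
};

struct intsrc {
        struct pic      *is_pic;
};

#define MAXCPU  16
static uint32_t intr_cpus[MAXCPU];      /* APIC IDs the SMP code declared usable */
static int      num_intr_cpus = 1;      /* BSP is always usable */
static int      current_cpu;

/* Called when the first handler is added to a source: one CPU per IRQ. */
static void
intr_assign_next_cpu(struct intsrc *isrc)
{
        isrc->is_pic->pic_assign_cpu(isrc, intr_cpus[current_cpu]);
        current_cpu = (current_cpu + 1) % num_intr_cpus;
}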

The MFC is tentative, but I want to see if this fixes problems some folks
had with UP APIC kernels on 6.0 on SMP machines (an SMP kernel would work
fine, but a UP APIC kernel (such as GENERIC in RELENG_6) would lose
interrupts).

MFC after:	1 week
2006-02-28 22:24:55 +00:00
Colin Percival
69084095dc Add frequency-voltage tables for Intel 778, 758, 773, 753, and 733J
processors.

Obtained from:	Intel Datasheet 302189-008
2006-02-25 04:55:38 +00:00
Sam Leffler
3f676959ae guard function decls with _KERNEL so user code can include this file
MFC after:	1 week
2006-02-22 21:38:33 +00:00
John Baldwin
06ad42b2f7 Close some races between procfs/ptrace and exit(2):
- Reorder the events in exit(2) slightly so that we trigger the S_EXIT
  stop event earlier.  After we have signalled that, we set P_WEXIT and
  then wait for any processes with a hold on the vmspace via PHOLD to
  release it.  PHOLD now KASSERT()'s that P_WEXIT is clear when it is
  invoked, and PRELE now does a wakeup if P_WEXIT is set and p_lock drops
  to zero.
- Change proc_rwmem() to require that the process being read from has its
  vmspace held via PHOLD by the caller (see the PHOLD/PRELE sketch after
  this list) and get rid of all the junk to screw around with the vmspace
  reference count as we no longer need it.
- In ptrace() and pseudofs(), treat a process with P_WEXIT set as if it
  doesn't exist.
- Only do one PHOLD in kern_ptrace() now, and do it earlier so it covers
  FIX_SSTEP() (since on alpha at least this can end up calling proc_rwmem()
  to clear an earlier single-step simulated via a breakpoint).  We only
  do one to avoid races.  Also, by making the EINVAL error for unknown
  requests be part of the default: case in the switch, the various
  switch cases can now just break out to return which removes a _lot_ of
  duplicated PRELE and proc unlocks, etc.  Also, it fixes at least one bug
  where a LWP ptrace command could return EINVAL with the proc lock still
  held.
- Changed the locking for ptrace_single_step(), ptrace_set_pc(), and
  ptrace_clear_single_step() to always be called with the proc lock
  held (it was a mixed bag previously).  Alpha and arm have to drop
  the lock while they mess around with breakpoints, but other archs
  avoid extra lock release/acquires in ptrace().  I did have to fix a
  couple of other consumers in kern_kse and a few other places to
  hold the proc lock and PHOLD.
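
A sketch of the PHOLD/PRELE handshake with P_WEXIT referred to above
(user-space stand-ins for KASSERT/wakeup; the flag value is illustrative):

#include <assert.h>

#define KASSERT(exp, msg)       assert(exp)     /* stand-in for the kernel macro */
static void wakeup(void *chan) { (void)chan; }  /* stand-in */

#define P_WEXIT 0x02000                 /* process is exiting */

struct proc {
        int     p_flag;
        int     p_lock;                 /* hold count keeping the vmspace around */
};

static void
phold(struct proc *p)                   /* proc lock held by the caller */
{
        KASSERT((p->p_flag & P_WEXIT) == 0, ("PHOLD of an exiting process"));
        p->p_lock++;
}

static void
prele(struct proc *p)                   /* proc lock held by the caller */
{
        p->p_lock--;
        if ((p->p_flag & P_WEXIT) && p->p_lock == 0)
                wakeup(&p->p_lock);     /* let exit(2) finish tearing down */
}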

Tested by:	ps (1 mostly, but some bits of 2-4 as well)
MFC after:	1 week
2006-02-22 18:57:50 +00:00
Tor Egge
6bd7e81d83 Rounding addr upwards to next 4M or 2M boundary in pmap_growkernel() could
cause addr to become 0, resulting in an early return without populating
the last PDE.
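
The failure is plain 32-bit wraparound; a sketch of the round-and-clamp
(illustrative values, 4 MB PDEs as in the non-PAE case):

#include <stdint.h>

#define NBPDR           (1u << 22)      /* bytes mapped by one PDE (non-PAE) */
#define KERNEL_VM_END   0xfec00000u     /* illustrative top of kernel VA */

static uint32_t
round_kernel_end(uint32_t addr)
{
        /* Rounding up can wrap to 0 within 4 MB of the top of the address
         * space, which made pmap_growkernel() skip the last PDE. */
        addr = (addr + NBPDR - 1) & ~(NBPDR - 1);
        if (addr - 1 >= KERNEL_VM_END)  /* wrap-safe clamp */
                addr = KERNEL_VM_END;
        return addr;
}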

Reviewed by:	alc
2006-02-16 22:10:57 +00:00
David Malone
0cbae93607 It seems bit 5 of cpu_feature2 is the VMX (Virtual Machine Extensions)
bit. While I'm here, delete a comment that was cut and pasted from the
cpu_features code that doesn't belong here.
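
Bit 5 of the CPUID.1 %ecx word corresponds to a mask of 0x20, i.e. (as a
sketch of the flag definition):

#define CPUID2_VMX      0x00000020      /* CPUID.1 %ecx (cpu_feature2) bit 5: VMX */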
2006-02-15 14:48:59 +00:00
Poul-Henning Kamp
e8444a7e6f CPU time accounting speedup (step 2)
Keep accounting time (in per-cpu) cputicks and the statistics counts
in the thread and summarize into struct proc when at context switch.

Don't reach across CPUs in calcru().

Add code to calibrate the top speed of cpu_tickrate() for variable
cpu_tick hardware (like TSC on power managed machines).

Don't enforce monotonicity (at least for now) in calcru.  While the
calibrated cpu_tickrate ramps up it may not be true.

Use 27MHz counter on i386/Geode.

Use TSC on amd64 & i386 if present.

Use tick counter on sparc64
2006-02-11 09:33:07 +00:00
Rink Springer
424d9b482d Cleaned up the memory initialization and moved some defines from the
framebuffer to an include file.

Reviewed by:		imp
Approved by:		imp (mentor)
2006-02-10 18:48:22 +00:00
Yaroslav Tykhiy
84d8f1b027 Avoid calling CPUID function 0x02 if the CPU reports no support for
it.  The former code used to hang older Intel CPUs by trying to get
non-existent TLB info 2^32 times.

Reduce code duplication around the calls to CPUID 0x02 by using
do-while loops.
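
The guarded do-while pattern looks roughly like this (do_cpuid() shown as
a local stand-in; cpu_high is the highest supported basic CPUID leaf):

#include <stdint.h>

static void
do_cpuid(uint32_t ax, uint32_t regs[4])         /* stand-in for the cpufunc.h helper */
{
        __asm __volatile("cpuid"
            : "=a" (regs[0]), "=b" (regs[1]), "=c" (regs[2]), "=d" (regs[3])
            : "0" (ax));
}

static void
print_tlb_info(uint32_t cpu_high)
{
        uint32_t regs[4];
        unsigned ntimes = 0;

        if (cpu_high < 2)               /* leaf 0x02 not implemented: bail out */
                return;
        do {
                do_cpuid(0x2, regs);
                if (ntimes == 0) {
                        ntimes = regs[0] & 0xff;        /* low byte: total passes */
                        if (ntimes == 0)                /* defensive */
                                break;
                }
                /* ... decode the descriptor bytes in regs[0..3] here ... */
        } while (--ntimes > 0);
}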

PR:		i386/92977
Tested by:	cy
2006-02-09 09:10:54 +00:00
Poul-Henning Kamp
eb2da9a51f Simplify system time accounting for profiling.
Rename struct thread's td_sticks to td_pticks, we will need the
other name for more appropriately named use shortly.  Reduce it
from uint64_t to u_int.

Clear td_pticks whenever we enter the kernel instead of recording
its value as a reference for userret().  Use the absolute value of
td->td_pticks in userret() and eliminate the third argument.
2006-02-08 08:09:17 +00:00
Poul-Henning Kamp
5b1a8eb397 Modify the way we account for CPU time spent (step 1)
Keep track of time spent by the cpu in various contexts in units of
"cputicks" and scale to real-world microsec^H^H^H^H^H^H^H^Hclock_t
only when somebody wants to inspect the numbers.

For now "cputicks" are still derived from the current timecounter
and therefore things should by definition remain sensible also on
SMP machines.  (The main reason for this first milestone commit is
to verify that hypothesis.)

On slower machines, avoiding the multiplications needed to normalize
timestamps at every context switch comes out as a 5-7% better score on the
unixbench/context1 microbenchmark.  On more modern hardware no change
in performance is seen.
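
A sketch of the deferred scaling (illustrative names; at this stage the
raw ticks still come from the timecounter):

#include <stdint.h>

static uint64_t cpu_tickrate = 1000000000;      /* ticks per second, calibrated */

struct thread_times {
        uint64_t td_runtime;            /* accumulated raw cputicks */
};

/* Hot path (context switch): just accumulate raw ticks, no scaling. */
static void
sched_charge(struct thread_times *td, uint64_t now, uint64_t switched_in)
{
        td->td_runtime += now - switched_in;
}

/* Slow path (e.g. calcru()): convert only when somebody looks at it.
 * Naive scaling; the real code is careful about overflow. */
static uint64_t
thread_runtime_usec(const struct thread_times *td)
{
        return td->td_runtime * 1000000 / cpu_tickrate;
}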
2006-02-07 21:22:02 +00:00
Robert Watson
ce41b52994 Regenerate. 2006-02-06 22:15:00 +00:00