[haiku-commits] BRANCH mmu_man-github.sam460ex [9346f5ebceab] in src/system: kernel/arch/ppc/paging/classic kernel/arch/ppc boot/platform/u-boot kernel/arch/ppc/paging

  • From: mmu_man-github.sam460ex <community@xxxxxxxxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Sat, 21 May 2016 00:00:42 +0200 (CEST)

added 23 changesets to branch 'refs/remotes/mmu_man-github/sam460ex'
old head: 85ffdda742585e0374f4e57f260f9e7407087af5
new head: 9346f5ebceab0c435fe88d3abb282235096fe90c
overview: https://github.com/mmuman/haiku/compare/85ffdda74258...9346f5ebceab

----------------------------------------------------------------------------

77fa119b50d9: PPC: Restructure paging stuff to match other platforms
  
  First attempt.
  
  Totally untested.

ce686ba95793: PPC: Split cpu-specific files into separate objects

8554d8ba8ae6: PPC: rename arch_exceptions_44x.S to arch_exceptions_440.S

fdef69f48617: PPC: compile arch_exception*.S in cpu-specific objects

5cf56f6478a3: ppc/paging: Convert to new-style CPU management
  
  * Aka, post-scheduler changes
  * Luckily PPC paging code is very similar to x86 paging now

7cfdb69e24e4: PPC: u-boot: define bootloader stack in linker script
  
  Align with ARM stuff from hrev47834. It's not yet used though.

404b1d988f80: PPC: U-Boot: fix gUBootOS offset
  
  Since the removal of some other variables, we were overwriting some random
  function.

67c989ec243c: PPC: Use FUNCTION_END in arch_asm.S

a9e30a4a4965: PPC: arch_asm.S: style fix
  
  Capitalize TODO & FIXME; gedit prefers those.

e1cb2b73073d: PPC: add stub for arch_int_assign_to_cpu

74536e829666: PPC: add stub for arch_smp_send_multicast_ici

2e54831a5add: PPC: call debug_uart_from_fdt with C++ linkage

7528c1adce3c: U-Boot: PPC: make the shift calculation more obvious
  
  It's the 11th bit, counting from the MSB, in the top 16 bits.
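
  (A sketch of that arithmetic: PowerPC numbers bits from the MSB, so within
  a 16-bit halfword bit n is 1 << (15 - n), and once that halfword occupies
  the top 16 bits of a 32-bit word the mask is shifted up by 16. The macro
  name is invented for illustration:)

      /* bit n, counting from the MSB of a 16-bit halfword */
      #define BIT_FROM_MSB16(n)   (1u << (15 - (n)))

      /* the 11th bit, in the top 16 bits of a 32-bit word */
      static const unsigned int kMask = BIT_FROM_MSB16(11) << 16;
          /* == (1 << 4) << 16 == 0x00100000 */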

9dec4c56fc84: U-Boot: PPC: Try to enable unaligned transfers
  
  This however doesn't help with the 64-bit float operations that
  gcc emits when assigning the physical framebuffer address in kernel_args,
  which is a packed struct...
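
  (A hedged illustration of the failure mode: in a packed struct a 64-bit
  member may be only 4-byte aligned, and on classic PPC gcc likes to move
  8-byte values through the FPU with lfd/stfd, which raises an alignment
  exception on such addresses regardless of how integer unaligned transfers
  are configured. The struct and field names below are invented for the
  example:)

      #include <stdint.h>
      #include <string.h>

      struct example_args {
          uint32_t something;
          uint64_t frame_buffer_phys;   /* possibly only 4-byte aligned */
      } __attribute__((packed));

      void
      set_frame_buffer(struct example_args *args, uint64_t physicalAddress)
      {
          /* args->frame_buffer_phys = physicalAddress;  may emit lfd/stfd */
          memcpy(&args->frame_buffer_phys, &physicalAddress,
              sizeof(physicalAddress));   /* byte/word-wise copy instead */
      }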

c1d25b488ed7: PPC: make arch_framebuffer.h more like ARM version
  
  Now the only difference is the physical address, which is returned as
  phys_addr_t, as it should be.

ec7022826a34: HACK: U-Boot: add a GraphicsDevice arch_video prober + hardcode Sam460
  
  Sadly, even digging through RAM for a valid GraphicsDevice struct fails
  on my Sam460 board... so for now I hardcode the address anyway.
  
  TODO: clean this mess
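
  (Roughly, the idea of digging through RAM is to walk candidate addresses
  and sanity-check fields of what might be U-Boot's GraphicsDevice. The
  subset of fields and the bounds below are illustrative guesses, not the
  actual heuristics of video_gd_probe.cpp:)

      #include <stdint.h>

      /* subset of U-Boot's GraphicsDevice, for the sake of the example */
      struct candidate_gd {
          uint32_t frameAdrs;     /* framebuffer physical address */
          uint32_t winSizeX;      /* visible width */
          uint32_t winSizeY;      /* visible height */
          uint32_t gdfBytesPP;    /* bytes per pixel */
      };

      static int
      looks_like_graphics_device(const struct candidate_gd *gd)
      {
          return gd->frameAdrs != 0
              && gd->winSizeX >= 320 && gd->winSizeX <= 4096
              && gd->winSizeY >= 200 && gd->winSizeY <= 2160
              && gd->gdfBytesPP >= 1 && gd->gdfBytesPP <= 4;
      }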

62ab4262f6da: U-Boot: PPC: dump the start of the board_data struct
  
  Not so useful but just in case...
  
  Sadly, this struct is both compile-time and arch dependent :-(

d3239334e526: PPC: work around a gcc/binutils bug on DEBUG build
  
  cf. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=37758

9ce0c271426b: HACK: PPC: stub out arch_cpu_user_{memcpy,memset,strlcpy}
  
  Someone should write the PPC asm for this...
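
  (For reference, a C sketch of the pattern such routines follow on other
  ports: park the per-thread fault handler on a recovery label so a bad
  user pointer unwinds instead of panicking. The gcc computed-goto is only
  illustrative; the real thing is assembly so the fault return lands at a
  known address, and the faultHandler-parameter signature is the old
  convention assumed here:)

      status_t
      arch_cpu_user_memcpy(void *to, const void *from, size_t size,
          addr_t *faultHandler)
      {
          addr_t oldFaultHandler = *faultHandler;
          *faultHandler = (addr_t)&&error;    /* GNU C label address */

          memcpy(to, from, size);

          *faultHandler = oldFaultHandler;
          return B_OK;

      error:
          *faultHandler = oldFaultHandler;
          return B_BAD_ADDRESS;
      }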

973323571170: HACK: PPC: work around some atomic issue
  
  TODO: Is it still needed?

f652143b303f: HACK: PPC: work around some atomic issue
  
  TODO: Is this still needed?

84f5368be0cd: FIXME: (disabled) PPC: add generic atomic implementation
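
  (For context: on 32-bit PowerPC a generic atomic boils down to a
  load-reserve/store-conditional loop. A minimal sketch of the technique,
  not the disabled code from this commit:)

      /* atomically add 'addend' to *value, returning the previous value */
      static inline int32
      generic_atomic_add(volatile int32 *value, int32 addend)
      {
          int32 oldValue, newValue;
          asm volatile(
              "1: lwarx   %0, 0, %3\n"    /* load and reserve */
              "   add     %1, %0, %4\n"   /* compute new value */
              "   stwcx.  %1, 0, %3\n"    /* store if reservation held */
              "   bne-    1b\n"           /* reservation lost - retry */
              : "=&r"(oldValue), "=&r"(newValue), "+m"(*value)
              : "r"(value), "r"(addend)
              : "cc", "memory");
          return oldValue;
      }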

9346f5ebceab: HACK: PPC: add some build flags to try to work around alignment issue
  
  didn't really work...

                                          [ François Revol <revol@xxxxxxx> ]

----------------------------------------------------------------------------

30 files changed, 3286 insertions(+), 640 deletions(-)
build/jam/ArchitectureRules                      |   10 +
src/system/boot/arch/ppc/arch_framebuffer.h      |   11 +-
src/system/boot/platform/u-boot/Jamfile          |    1 +
.../boot/platform/u-boot/arch/ppc/arch_cpu.cpp   |   13 +-
src/system/boot/platform/u-boot/arch/ppc/shell.S |    5 +-
src/system/boot/platform/u-boot/start.cpp        |   22 +
.../boot/platform/u-boot/video_gd_probe.cpp      |  327 +++++
src/system/kernel/arch/ppc/Jamfile               |   40 +-
src/system/kernel/arch/ppc/arch_asm.S            |  152 +-
...ch_exceptions_44x.S => arch_exceptions_440.S} |   38 +-
src/system/kernel/arch/ppc/arch_int.cpp          |    7 +
src/system/kernel/arch/ppc/arch_platform.cpp     |    2 +-
src/system/kernel/arch/ppc/arch_smp.cpp          |    7 +
.../kernel/arch/ppc/arch_vm_translation_map.cpp  |  672 +--------
.../kernel/arch/ppc/paging/PPCPagingMethod.cpp   |   15 +
.../kernel/arch/ppc/paging/PPCPagingMethod.h     |   46 +
.../arch/ppc/paging/PPCPagingStructures.cpp      |   19 +
.../kernel/arch/ppc/paging/PPCPagingStructures.h |   52 +
.../arch/ppc/paging/PPCVMTranslationMap.cpp      |  149 ++
.../kernel/arch/ppc/paging/PPCVMTranslationMap.h |   60 +
.../paging/classic/PPCPagingMethodClassic.cpp    |  423 ++++++
.../ppc/paging/classic/PPCPagingMethodClassic.h  |  208 +++
.../classic/PPCPagingStructuresClassic.cpp       |  141 ++
.../paging/classic/PPCPagingStructuresClassic.h  |   33 +
.../classic/PPCVMTranslationMapClassic.cpp       | 1365 ++++++++++++++++++
.../paging/classic/PPCVMTranslationMapClassic.h  |   81 ++
src/system/kernel/lib/arch/ppc/Jamfile           |    1 +
src/system/kernel/main.cpp                       |    8 +-
src/system/kernel/smp.cpp                        |    8 +-
src/system/ldscripts/ppc/boot_loader_u-boot.ld   |   10 +

############################################################################

Commit:      77fa119b50d9a52050faf04ea051353a7664ea38
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Nov  8 22:45:08 2013 UTC

PPC: Restructure paging stuff to match other platforms

First attempt.

Totally untested.

----------------------------------------------------------------------------

diff --git a/src/system/kernel/arch/ppc/Jamfile b/src/system/kernel/arch/ppc/Jamfile
index 63479fd..5659f6c 100644
--- a/src/system/kernel/arch/ppc/Jamfile
+++ b/src/system/kernel/arch/ppc/Jamfile
@@ -4,6 +4,8 @@ SubDirHdrs $(SUBDIR) $(DOTDOT) generic ;
 UsePrivateKernelHeaders ;
 
 SEARCH_SOURCE += [ FDirName $(SUBDIR) $(DOTDOT) generic ] ;
+SEARCH_SOURCE += [ FDirName $(SUBDIR) paging ] ;
+SEARCH_SOURCE += [ FDirName $(SUBDIR) paging classic ] ;
 
 KernelMergeObject kernel_arch_ppc.o :
        arch_commpage.cpp
@@ -29,9 +31,26 @@ KernelMergeObject kernel_arch_ppc.o :
        debug_uart_8250.cpp
        arch_uart_8250.cpp
 
+       # paging
        generic_vm_physical_page_mapper.cpp
        generic_vm_physical_page_ops.cpp
        GenericVMPhysicalPageMapper.cpp
+       PPCPagingMethod.cpp
+       PPCPagingStructures.cpp
+       PPCVMTranslationMap.cpp
+
+       # TODO: compile with correct -mcpu
+       # paging/classic
+       PPCPagingMethodClassic.cpp
+       PPCPagingStructuresClassic.cpp
+       PPCVMTranslationMapClassic.cpp
+
+       # TODO: compile with correct -mcpu
+       # paging/460
+       #PPCPagingMethod460.cpp
+       #PPCPagingStructures460.cpp
+       #PPCVMTranslationMap460.cpp
+
        :
        $(TARGET_KERNEL_PIC_CCFLAGS) -Wno-unused
 ;
diff --git a/src/system/kernel/arch/ppc/arch_vm_translation_map.cpp b/src/system/kernel/arch/ppc/arch_vm_translation_map.cpp
index 2c9565c..edb091c 100644
--- a/src/system/kernel/arch/ppc/arch_vm_translation_map.cpp
+++ b/src/system/kernel/arch/ppc/arch_vm_translation_map.cpp
@@ -79,7 +79,7 @@
 #include <KernelExport.h>
 
 #include <arch/cpu.h>
-#include <arch_mmu.h>
+//#include <arch_mmu.h>
 #include <boot/kernel_args.h>
 #include <int.h>
 #include <kernel.h>
@@ -93,38 +93,30 @@
 #include <util/AutoLock.h>
 
 #include "generic_vm_physical_page_mapper.h"
-#include "generic_vm_physical_page_ops.h"
-#include "GenericVMPhysicalPageMapper.h"
+//#include "generic_vm_physical_page_ops.h"
+//#include "GenericVMPhysicalPageMapper.h"
 
+#include "paging/PPCVMTranslationMap.h"
+#include "paging/classic/PPCPagingMethodClassic.h"
+//#include "paging/460/PPCPagingMethod460.h"
 
-static struct page_table_entry_group *sPageTable;
-static size_t sPageTableSize;
-static uint32 sPageTableHashMask;
-static area_id sPageTableArea;
 
-// 64 MB of iospace
-#define IOSPACE_SIZE (64*1024*1024)
-// We only have small (4 KB) pages. The only reason for choosing greater chunk
-// size is to keep the waste of memory limited, since the generic page mapper
-// allocates structures per physical/virtual chunk.
-// TODO: Implement a page mapper more suitable for small pages!
-#define IOSPACE_CHUNK_SIZE (16 * B_PAGE_SIZE)
-
-static addr_t sIOSpaceBase;
-
-static GenericVMPhysicalPageMapper sPhysicalPageMapper;
+#define TRACE_VM_TMAP
+#ifdef TRACE_VM_TMAP
+#      define TRACE(x...) dprintf(x)
+#else
+#      define TRACE(x...) ;
+#endif
 
-// The VSID is a 24 bit number. The lower three bits are defined by the
-// (effective) segment number, which leaves us with a 21 bit space of
-// VSID bases (= 2 * 1024 * 1024).
-#define MAX_VSID_BASES (PAGE_SIZE * 8)
-static uint32 sVSIDBaseBitmap[MAX_VSID_BASES / (sizeof(uint32) * 8)];
-static spinlock sVSIDBaseBitmapLock;
 
-#define VSID_BASE_SHIFT 3
-#define VADDR_TO_VSID(vsidBase, vaddr) (vsidBase + ((vaddr) >> 28))
+static union {
+       uint64  align;
+       //char  amcc460[sizeof(PPCPagingMethod460)];
+       char    classic[sizeof(PPCPagingMethodClassic)];
+} sPagingMethodBuffer;
 
 
+#if 0
 struct PPCVMTranslationMap : VMTranslationMap {
                                                                PPCVMTranslationMap();
        virtual                                         ~PPCVMTranslationMap();
@@ -174,367 +166,20 @@ struct PPCVMTranslationMap : VMTranslationMap {
 protected:
                        int                                     fVSIDBase;
 };
+#endif
 
 
 void
 ppc_translation_map_change_asid(VMTranslationMap *map)
 {
-// this code depends on the kernel being at 0x80000000, fix if we change that
-#if KERNEL_BASE != 0x80000000
-#error fix me
-#endif
-       int vsidBase = static_cast<PPCVMTranslationMap*>(map)->VSIDBase();
-
-       isync();        // synchronize context
-       asm("mtsr       0,%0" : : "g"(vsidBase));
-       asm("mtsr       1,%0" : : "g"(vsidBase + 1));
-       asm("mtsr       2,%0" : : "g"(vsidBase + 2));
-       asm("mtsr       3,%0" : : "g"(vsidBase + 3));
-       asm("mtsr       4,%0" : : "g"(vsidBase + 4));
-       asm("mtsr       5,%0" : : "g"(vsidBase + 5));
-       asm("mtsr       6,%0" : : "g"(vsidBase + 6));
-       asm("mtsr       7,%0" : : "g"(vsidBase + 7));
-       isync();        // synchronize context
-}
-
-
-static void
-fill_page_table_entry(page_table_entry *entry, uint32 virtualSegmentID,
-       addr_t virtualAddress, phys_addr_t physicalAddress, uint8 protection,
-       uint32 memoryType, bool secondaryHash)
-{
-       // lower 32 bit - set at once
-       entry->physical_page_number = physicalAddress / B_PAGE_SIZE;
-       entry->_reserved0 = 0;
-       entry->referenced = false;
-       entry->changed = false;
-       entry->write_through = (memoryType == B_MTR_UC) || (memoryType == B_MTR_WT);
-       entry->caching_inhibited = (memoryType == B_MTR_UC);
-       entry->memory_coherent = false;
-       entry->guarded = false;
-       entry->_reserved1 = 0;
-       entry->page_protection = protection & 0x3;
-       eieio();
-               // we need to make sure that the lower 32 bit were
-               // already written when the entry becomes valid
-
-       // upper 32 bit
-       entry->virtual_segment_id = virtualSegmentID;
-       entry->secondary_hash = secondaryHash;
-       entry->abbr_page_index = (virtualAddress >> 22) & 0x3f;
-       entry->valid = true;
-
-       ppc_sync();
-}
-
-
-page_table_entry *
-PPCVMTranslationMap::LookupPageTableEntry(addr_t virtualAddress)
-{
-       // lookup the vsid based off the va
-       uint32 virtualSegmentID = VADDR_TO_VSID(fVSIDBase, virtualAddress);
-
-//     dprintf("vm_translation_map.lookup_page_table_entry: vsid %ld, va 
0x%lx\n", virtualSegmentID, virtualAddress);
-
-       // Search for the page table entry using the primary hash value
-
-       uint32 hash = page_table_entry::PrimaryHash(virtualSegmentID, virtualAddress);
-       page_table_entry_group *group = &sPageTable[hash & sPageTableHashMask];
-
-       for (int i = 0; i < 8; i++) {
-               page_table_entry *entry = &group->entry[i];
-
-               if (entry->virtual_segment_id == virtualSegmentID
-                       && entry->secondary_hash == false
-                       && entry->abbr_page_index == ((virtualAddress >> 22) & 0x3f))
-                       return entry;
-       }
-
-       // didn't find it, try the secondary hash value
-
-       hash = page_table_entry::SecondaryHash(hash);
-       group = &sPageTable[hash & sPageTableHashMask];
-
-       for (int i = 0; i < 8; i++) {
-               page_table_entry *entry = &group->entry[i];
-
-               if (entry->virtual_segment_id == virtualSegmentID
-                       && entry->secondary_hash == true
-                       && entry->abbr_page_index == ((virtualAddress >> 22) & 0x3f))
-                       return entry;
-       }
-
-       return NULL;
-}
-
-
-bool
-PPCVMTranslationMap::RemovePageTableEntry(addr_t virtualAddress)
-{
-       page_table_entry *entry = LookupPageTableEntry(virtualAddress);
-       if (entry == NULL)
-               return false;
-
-       entry->valid = 0;
-       ppc_sync();
-       tlbie(virtualAddress);
-       eieio();
-       tlbsync();
-       ppc_sync();
-
-       return true;
-}
-
-
-static status_t
-map_iospace_chunk(addr_t va, phys_addr_t pa, uint32 flags)
-{
-       pa &= ~(B_PAGE_SIZE - 1); // make sure it's page aligned
-       va &= ~(B_PAGE_SIZE - 1); // make sure it's page aligned
-       if (va < sIOSpaceBase || va >= (sIOSpaceBase + IOSPACE_SIZE))
-               panic("map_iospace_chunk: passed invalid va 0x%lx\n", va);
-
-       // map the pages
-       return ppc_map_address_range(va, pa, IOSPACE_CHUNK_SIZE);
+       static_cast<PPCVMTranslationMap*>(map)->ChangeASID();
 }
 
 
 // #pragma mark -
 
 
-PPCVMTranslationMap::PPCVMTranslationMap()
-{
-}
-
-
-PPCVMTranslationMap::~PPCVMTranslationMap()
-{
-       if (fMapCount > 0) {
-               panic("vm_translation_map.destroy_tmap: map %p has positive map 
count %ld\n",
-                       this, fMapCount);
-       }
-
-       // mark the vsid base not in use
-       int baseBit = fVSIDBase >> VSID_BASE_SHIFT;
-       atomic_and((int32 *)&sVSIDBaseBitmap[baseBit / 32],
-                       ~(1 << (baseBit % 32)));
-}
-
-
-status_t
-PPCVMTranslationMap::Init(bool kernel)
-{
-       cpu_status state = disable_interrupts();
-       acquire_spinlock(&sVSIDBaseBitmapLock);
-
-       // allocate a VSID base for this one
-       if (kernel) {
-               // The boot loader has set up the segment registers for identical
-               // mapping. Two VSID bases are reserved for the kernel: 0 and 8. The
-               // latter one for mapping the kernel address space (0x80000000...), the
-               // former one for the lower addresses required by the Open Firmware
-               // services.
-               fVSIDBase = 0;
-               sVSIDBaseBitmap[0] |= 0x3;
-       } else {
-               int i = 0;
-
-               while (i < MAX_VSID_BASES) {
-                       if (sVSIDBaseBitmap[i / 32] == 0xffffffff) {
-                               i += 32;
-                               continue;
-                       }
-                       if ((sVSIDBaseBitmap[i / 32] & (1 << (i % 32))) == 0) {
-                               // we found it
-                               sVSIDBaseBitmap[i / 32] |= 1 << (i % 32);
-                               break;
-                       }
-                       i++;
-               }
-               if (i >= MAX_VSID_BASES)
-                       panic("vm_translation_map_create: out of VSID bases\n");
-               fVSIDBase = i << VSID_BASE_SHIFT;
-       }
-
-       release_spinlock(&sVSIDBaseBitmapLock);
-       restore_interrupts(state);
-
-       return B_OK;
-}
-
-
-bool
-PPCVMTranslationMap::Lock()
-{
-       recursive_lock_lock(&fLock);
-       return true;
-}
-
-
-void
-PPCVMTranslationMap::Unlock()
-{
-       recursive_lock_unlock(&fLock);
-}
-
-
-size_t
-PPCVMTranslationMap::MaxPagesNeededToMap(addr_t start, addr_t end) const
-{
-       return 0;
-}
-
-
-status_t
-PPCVMTranslationMap::Map(addr_t virtualAddress, phys_addr_t physicalAddress,
-       uint32 attributes, uint32 memoryType, vm_page_reservation* reservation)
-{
-       // lookup the vsid based off the va
-       uint32 virtualSegmentID = VADDR_TO_VSID(fVSIDBase, virtualAddress);
-       uint32 protection = 0;
-
-       // ToDo: check this
-       // all kernel mappings are R/W to supervisor code
-       if (attributes & (B_READ_AREA | B_WRITE_AREA))
-               protection = (attributes & B_WRITE_AREA) ? PTE_READ_WRITE : PTE_READ_ONLY;
-
-       //dprintf("vm_translation_map.map_tmap: vsid %d, pa 0x%lx, va 0x%lx\n", 
vsid, pa, va);
-
-       // Search for a free page table slot using the primary hash value
-
-       uint32 hash = page_table_entry::PrimaryHash(virtualSegmentID, virtualAddress);
-       page_table_entry_group *group = &sPageTable[hash & sPageTableHashMask];
-
-       for (int i = 0; i < 8; i++) {
-               page_table_entry *entry = &group->entry[i];
-
-               if (entry->valid)
-                       continue;
-
-               fill_page_table_entry(entry, virtualSegmentID, virtualAddress, physicalAddress,
-                       protection, memoryType, false);
-               fMapCount++;
-               return B_OK;
-       }
-
-       // Didn't found one, try the secondary hash value
-
-       hash = page_table_entry::SecondaryHash(hash);
-       group = &sPageTable[hash & sPageTableHashMask];
-
-       for (int i = 0; i < 8; i++) {
-               page_table_entry *entry = &group->entry[i];
-
-               if (entry->valid)
-                       continue;
-
-               fill_page_table_entry(entry, virtualSegmentID, virtualAddress, physicalAddress,
-                       protection, memoryType, false);
-               fMapCount++;
-               return B_OK;
-       }
-
-       panic("vm_translation_map.map_tmap: hash table full\n");
-       return B_ERROR;
-}
-
-
-status_t
-PPCVMTranslationMap::Unmap(addr_t start, addr_t end)
-{
-       page_table_entry *entry;
-
-       start = ROUNDDOWN(start, B_PAGE_SIZE);
-       end = ROUNDUP(end, B_PAGE_SIZE);
-
-//     dprintf("vm_translation_map.unmap_tmap: start 0x%lx, end 0x%lx\n", 
start, end);
-
-       while (start < end) {
-               if (RemovePageTableEntry(start))
-                       fMapCount--;
-
-               start += B_PAGE_SIZE;
-       }
-
-       return B_OK;
-}
-
-
-status_t
-PPCVMTranslationMap::UnmapPage(VMArea* area, addr_t address,
-       bool updatePageQueue)
-{
-       ASSERT(address % B_PAGE_SIZE == 0);
-
-       RecursiveLocker locker(fLock);
-
-       if (area->cache_type == CACHE_TYPE_DEVICE) {
-               if (!RemovePageTableEntry(address))
-                       return B_ENTRY_NOT_FOUND;
-
-               fMapCount--;
-               return B_OK;
-       }
-
-       page_table_entry* entry = LookupPageTableEntry(address);
-       if (entry == NULL)
-               return B_ENTRY_NOT_FOUND;
-
-       page_num_t pageNumber = entry->physical_page_number;
-       bool accessed = entry->referenced;
-       bool modified = entry->changed;
-
-       RemovePageTableEntry(address);
-
-       fMapCount--;
-
-       locker.Detach();
-               // PageUnmapped() will unlock for us
-
-       PageUnmapped(area, pageNumber, accessed, modified, updatePageQueue);
-
-       return B_OK;
-}
-
-
-status_t
-PPCVMTranslationMap::Query(addr_t va, phys_addr_t *_outPhysical,
-       uint32 *_outFlags)
-{
-       page_table_entry *entry;
-
-       // default the flags to not present
-       *_outFlags = 0;
-       *_outPhysical = 0;
-
-       entry = LookupPageTableEntry(va);
-       if (entry == NULL)
-               return B_NO_ERROR;
-
-       // ToDo: check this!
-       if (IS_KERNEL_ADDRESS(va))
-               *_outFlags |= B_KERNEL_READ_AREA | (entry->page_protection == PTE_READ_ONLY ? 0 : B_KERNEL_WRITE_AREA);
-       else
-               *_outFlags |= B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA | B_READ_AREA | (entry->page_protection == PTE_READ_ONLY ? 0 : B_WRITE_AREA);
-
-       *_outFlags |= entry->changed ? PAGE_MODIFIED : 0;
-       *_outFlags |= entry->referenced ? PAGE_ACCESSED : 0;
-       *_outFlags |= entry->valid ? PAGE_PRESENT : 0;
-
-       *_outPhysical = entry->physical_page_number * B_PAGE_SIZE;
-
-       return B_OK;
-}
-
-
-status_t
-PPCVMTranslationMap::QueryInterrupt(addr_t virtualAddress,
-       phys_addr_t* _physicalAddress, uint32* _flags)
-{
-       return PPCVMTranslationMap::Query(virtualAddress, _physicalAddress, _flags);
-}
-
-
+#if 0//XXX:Not needed anymore ?
 addr_t
 PPCVMTranslationMap::MappedSize() const
 {
@@ -542,103 +187,6 @@ PPCVMTranslationMap::MappedSize() const
 }
 
 
-status_t
-PPCVMTranslationMap::Protect(addr_t base, addr_t top, uint32 attributes,
-       uint32 memoryType)
-{
-       // XXX finish
-       return B_ERROR;
-}
-
-
-status_t
-PPCVMTranslationMap::ClearFlags(addr_t virtualAddress, uint32 flags)
-{
-       page_table_entry *entry = LookupPageTableEntry(virtualAddress);
-       if (entry == NULL)
-               return B_NO_ERROR;
-
-       bool modified = false;
-
-       // clear the bits
-       if (flags & PAGE_MODIFIED && entry->changed) {
-               entry->changed = false;
-               modified = true;
-       }
-       if (flags & PAGE_ACCESSED && entry->referenced) {
-               entry->referenced = false;
-               modified = true;
-       }
-
-       // synchronize
-       if (modified) {
-               tlbie(virtualAddress);
-               eieio();
-               tlbsync();
-               ppc_sync();
-       }
-
-       return B_OK;
-}
-
-
-bool
-PPCVMTranslationMap::ClearAccessedAndModified(VMArea* area, addr_t address,
-       bool unmapIfUnaccessed, bool& _modified)
-{
-       // TODO: Implement for real! ATM this is just an approximation using
-       // Query(), ClearFlags(), and UnmapPage(). See below!
-
-       RecursiveLocker locker(fLock);
-
-       uint32 flags;
-       phys_addr_t physicalAddress;
-       if (Query(address, &physicalAddress, &flags) != B_OK
-               || (flags & PAGE_PRESENT) == 0) {
-               return false;
-       }
-
-       _modified = (flags & PAGE_MODIFIED) != 0;
-
-       if ((flags & (PAGE_ACCESSED | PAGE_MODIFIED)) != 0)
-               ClearFlags(address, flags & (PAGE_ACCESSED | PAGE_MODIFIED));
-
-       if ((flags & PAGE_ACCESSED) != 0)
-               return true;
-
-       if (!unmapIfUnaccessed)
-               return false;
-
-       locker.Unlock();
-
-       UnmapPage(area, address, false);
-               // TODO: Obvious race condition: Between querying and unmapping the
-               // page could have been accessed. We try to compensate by considering
-               // vm_page::{accessed,modified} (which would have been updated by
-               // UnmapPage()) below, but that doesn't quite match the required
-               // semantics of the method.
-
-       vm_page* page = vm_lookup_page(physicalAddress / B_PAGE_SIZE);
-       if (page == NULL)
-               return false;
-
-       _modified |= page->modified;
-
-       return page->accessed;
-}
-
-
-void
-PPCVMTranslationMap::Flush()
-{
-// TODO: arch_cpu_global_TLB_invalidate() is extremely expensive and doesn't
-// even cut it here. We are supposed to invalidate all TLB entries for this
-// map on all CPUs. We should loop over the virtual pages and invoke tlbie
-// instead (which marks the entry invalid on all CPUs).
-       arch_cpu_global_TLB_invalidate();
-}
-
-
 static status_t
 get_physical_page_tmap(phys_addr_t physicalAddress, addr_t *_virtualAddress,
        void **handle)
@@ -652,6 +200,7 @@ put_physical_page_tmap(addr_t virtualAddress, void *handle)
 {
        return generic_put_physical_page(virtualAddress);
 }
+#endif
 
 
 //  #pragma mark -
@@ -661,18 +210,7 @@ put_physical_page_tmap(addr_t virtualAddress, void *handle)
 status_t
 arch_vm_translation_map_create_map(bool kernel, VMTranslationMap** _map)
 {
-       PPCVMTranslationMap* map = new(std::nothrow) PPCVMTranslationMap;
-       if (map == NULL)
-               return B_NO_MEMORY;
-
-       status_t error = map->Init(kernel);
-       if (error != B_OK) {
-               delete map;
-               return error;
-       }
-
-       *_map = map;
-       return B_OK;
+       return gPPCPagingMethod->CreateTranslationMap(kernel, _map);
 }
 
 
@@ -680,60 +218,52 @@ status_t
 arch_vm_translation_map_init(kernel_args *args,
        VMPhysicalPageMapper** _physicalPageMapper)
 {
-       sPageTable = (page_table_entry_group *)args->arch_args.page_table.start;
-       sPageTableSize = args->arch_args.page_table.size;
-       sPageTableHashMask = sPageTableSize / sizeof(page_table_entry_group) - 1;
+       TRACE("vm_translation_map_init: entry\n");
 
-       // init physical page mapper
-       status_t error = generic_vm_physical_page_mapper_init(args,
-               map_iospace_chunk, &sIOSpaceBase, IOSPACE_SIZE, IOSPACE_CHUNK_SIZE);
-       if (error != B_OK)
-               return error;
+#ifdef TRACE_VM_TMAP
+       TRACE("physical memory ranges:\n");
+       for (uint32 i = 0; i < args->num_physical_memory_ranges; i++) {
+               phys_addr_t start = args->physical_memory_range[i].start;
+               phys_addr_t end = start + args->physical_memory_range[i].size;
+               TRACE("  %#10" B_PRIxPHYSADDR " - %#10" B_PRIxPHYSADDR "\n", 
start,
+                       end);
+       }
 
-       new(&sPhysicalPageMapper) GenericVMPhysicalPageMapper;
+       TRACE("allocated physical ranges:\n");
+       for (uint32 i = 0; i < args->num_physical_allocated_ranges; i++) {
+               phys_addr_t start = args->physical_allocated_range[i].start;
+               phys_addr_t end = start + args->physical_allocated_range[i].size;
+               TRACE("  %#10" B_PRIxPHYSADDR " - %#10" B_PRIxPHYSADDR "\n", 
start,
+                       end);
+       }
 
-       *_physicalPageMapper = &sPhysicalPageMapper;
-       return B_OK;
+       TRACE("allocated virtual ranges:\n");
+       for (uint32 i = 0; i < args->num_virtual_allocated_ranges; i++) {
+               addr_t start = args->virtual_allocated_range[i].start;
+               addr_t end = start + args->virtual_allocated_range[i].size;
+               TRACE("  %#10" B_PRIxADDR " - %#10" B_PRIxADDR "\n", start, 
end);
+       }
+#endif
+
+       if (false /* TODO:Check for AMCC460! */) {
+               dprintf("using AMCC 460 paging\n");
+               panic("XXX");
+               //XXX:gPPCPagingMethod = new(&sPagingMethodBuffer) PPCPagingMethod460;
+       } else {
+               dprintf("using Classic paging\n");
+               gPPCPagingMethod = new(&sPagingMethodBuffer) PPCPagingMethodClassic;
+       }
+
+       return gPPCPagingMethod->Init(args, _physicalPageMapper);
 }
 
 
 status_t
 arch_vm_translation_map_init_post_area(kernel_args *args)
 {
-       // If the page table doesn't lie within the kernel address space, we
-       // remap it.
-       if (!IS_KERNEL_ADDRESS(sPageTable)) {
-               addr_t newAddress = (addr_t)sPageTable;
-               status_t error = ppc_remap_address_range(&newAddress, sPageTableSize,
-                       false);
-               if (error != B_OK) {
-                       panic("arch_vm_translation_map_init_post_area(): Failed 
to remap "
-                               "the page table!");
-                       return error;
-               }
+       TRACE("vm_translation_map_init_post_area: entry\n");
 
-               // set the new page table address
-               addr_t oldVirtualBase = (addr_t)(sPageTable);
-               sPageTable = (page_table_entry_group*)newAddress;
-
-               // unmap the old pages
-               ppc_unmap_address_range(oldVirtualBase, sPageTableSize);
-
-// TODO: We should probably map the page table via BAT. It is relatively large,
-// and due to being a hash table the access patterns might look sporadic, which
-// certainly isn't to the liking of the TLB.
-       }
-
-       // create an area to cover the page table
-       sPageTableArea = create_area("page_table", (void **)&sPageTable, 
B_EXACT_ADDRESS,
-               sPageTableSize, B_ALREADY_WIRED, B_KERNEL_READ_AREA | 
B_KERNEL_WRITE_AREA);
-
-       // init physical page mapper
-       status_t error = generic_vm_physical_page_mapper_init_post_area(args);
-       if (error != B_OK)
-               return error;
-
-       return B_OK;
+       return gPPCPagingMethod->InitPostArea(args);
 }
 
 
@@ -752,38 +282,13 @@ arch_vm_translation_map_init_post_sem(kernel_args *args)
  */
 
 status_t
-arch_vm_translation_map_early_map(kernel_args *ka, addr_t virtualAddress,
-       phys_addr_t physicalAddress, uint8 attributes,
-       phys_addr_t (*get_free_page)(kernel_args *))
+arch_vm_translation_map_early_map(kernel_args *args, addr_t va, phys_addr_t pa,
+       uint8 attributes, phys_addr_t (*get_free_page)(kernel_args *))
 {
-       uint32 virtualSegmentID = get_sr((void *)virtualAddress) & 0xffffff;
-
-       uint32 hash = page_table_entry::PrimaryHash(virtualSegmentID, (uint32)virtualAddress);
-       page_table_entry_group *group = &sPageTable[hash & sPageTableHashMask];
-
-       for (int32 i = 0; i < 8; i++) {
-               // 8 entries in a group
-               if (group->entry[i].valid)
-                       continue;
+       TRACE("early_tmap: entry pa %#" B_PRIxPHYSADDR " va %#" B_PRIxADDR 
"\n", pa,
+               va);
 
-               fill_page_table_entry(&group->entry[i], virtualSegmentID,
-                       virtualAddress, physicalAddress, PTE_READ_WRITE, 0, false);
-               return B_OK;
-       }
-
-       hash = page_table_entry::SecondaryHash(hash);
-       group = &sPageTable[hash & sPageTableHashMask];
-
-       for (int32 i = 0; i < 8; i++) {
-               if (group->entry[i].valid)
-                       continue;
-
-               fill_page_table_entry(&group->entry[i], virtualSegmentID,
-                       virtualAddress, physicalAddress, PTE_READ_WRITE, 0, true);
-               return B_OK;
-       }
-
-       return B_ERROR;
+       return gPPCPagingMethod->MapEarly(args, va, pa, attributes, get_free_page);
 }
 
 
@@ -840,46 +345,19 @@ ppc_unmap_address_range(addr_t virtualAddress, size_t size)
 
        PPCVMTranslationMap* map = static_cast<PPCVMTranslationMap*>(
                addressSpace->TranslationMap());
-       for (0; virtualAddress < virtualEnd; virtualAddress += B_PAGE_SIZE)
-               map->RemovePageTableEntry(virtualAddress);
+       map->Unmap(virtualAddress, virtualEnd);
 }
 
 
 status_t
 ppc_remap_address_range(addr_t *_virtualAddress, size_t size, bool unmap)
 {
-       addr_t virtualAddress = ROUNDDOWN(*_virtualAddress, B_PAGE_SIZE);
-       size = ROUNDUP(*_virtualAddress + size - virtualAddress, B_PAGE_SIZE);
-
        VMAddressSpace *addressSpace = VMAddressSpace::Kernel();
 
-       // reserve space in the address space
-       void *newAddress = NULL;
-       status_t error = vm_reserve_address_range(addressSpace->ID(), &newAddress,
-               B_ANY_KERNEL_ADDRESS, size, B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA);
-       if (error != B_OK)
-               return error;
-
-       // get the area's first physical page
        PPCVMTranslationMap* map = static_cast<PPCVMTranslationMap*>(
                addressSpace->TranslationMap());
-       page_table_entry *entry = map->LookupPageTableEntry(virtualAddress);
-       if (!entry)
-               return B_ERROR;
-       phys_addr_t physicalBase = (phys_addr_t)entry->physical_page_number << 12;
-
-       // map the pages
-       error = ppc_map_address_range((addr_t)newAddress, physicalBase, size);
-       if (error != B_OK)
-               return error;
-
-       *_virtualAddress = (addr_t)newAddress;
 
-       // unmap the old pages
-       if (unmap)
-               ppc_unmap_address_range(virtualAddress, size);
-
-       return B_OK;
+       return map->RemapAddressRange(_virtualAddress, size, unmap);
 }
 
 
@@ -887,20 +365,8 @@ bool
 arch_vm_translation_map_is_kernel_page_accessible(addr_t virtualAddress,
        uint32 protection)
 {
-       VMAddressSpace *addressSpace = VMAddressSpace::Kernel();
-
-       PPCVMTranslationMap* map = static_cast<PPCVMTranslationMap*>(
-               addressSpace->TranslationMap());
-
-       phys_addr_t physicalAddress;
-       uint32 flags;
-       if (map->Query(virtualAddress, &physicalAddress, &flags) != B_OK)
-               return false;
-
-       if ((flags & PAGE_PRESENT) == 0)
-               return false;
+       if (!gPPCPagingMethod)
+               return true;
 
-       // present means kernel-readable, so check for writable
-       return (protection & B_KERNEL_WRITE_AREA) == 0
-               || (flags & B_KERNEL_WRITE_AREA) != 0;
+       return gPPCPagingMethod->IsKernelPageAccessible(virtualAddress, protection);
 }
diff --git a/src/system/kernel/arch/ppc/paging/PPCPagingMethod.cpp b/src/system/kernel/arch/ppc/paging/PPCPagingMethod.cpp
new file mode 100644
index 0000000..5f0b6a2
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/PPCPagingMethod.cpp
@@ -0,0 +1,15 @@
+/*
+ * Copyright 2010, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Distributed under the terms of the MIT License.
+ */
+
+
+#include "paging/PPCPagingMethod.h"
+
+
+PPCPagingMethod* gPPCPagingMethod;
+
+
+PPCPagingMethod::~PPCPagingMethod()
+{
+}
diff --git a/src/system/kernel/arch/ppc/paging/PPCPagingMethod.h b/src/system/kernel/arch/ppc/paging/PPCPagingMethod.h
new file mode 100644
index 0000000..9ef4846
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/PPCPagingMethod.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright 2010, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Distributed under the terms of the MIT License.
+ */
+#ifndef KERNEL_ARCH_PPC_PAGING_PPC_PAGING_METHOD_H
+#define KERNEL_ARCH_PPC_PAGING_PPC_PAGING_METHOD_H
+
+
+#include <SupportDefs.h>
+
+#include <vm/vm_types.h>
+
+
+struct kernel_args;
+struct VMPhysicalPageMapper;
+struct VMTranslationMap;
+
+
+class PPCPagingMethod {
+public:
+       virtual                                         ~PPCPagingMethod();
+
+       virtual status_t                        Init(kernel_args* args,
+                                                                       VMPhysicalPageMapper** _physicalPageMapper)
+                                                                               = 0;
+       virtual status_t                        InitPostArea(kernel_args* args) = 0;
+
+       virtual status_t                        CreateTranslationMap(bool kernel,
+                                                                       VMTranslationMap** _map) = 0;
+
+       virtual status_t                        MapEarly(kernel_args* args,
+                                                                       addr_t virtualAddress,
+                                                                       phys_addr_t physicalAddress,
+                                                                       uint8 attributes,
+                                                                       page_num_t (*get_free_page)(kernel_args*))
+                                                                               = 0;
+
+       virtual bool                            IsKernelPageAccessible(addr_t virtualAddress,
+                                                                       uint32 protection) = 0;
+};
+
+
+extern PPCPagingMethod* gPPCPagingMethod;
+
+
+#endif // KERNEL_ARCH_PPC_PAGING_PPC_PAGING_METHOD_H
diff --git a/src/system/kernel/arch/ppc/paging/PPCPagingStructures.cpp b/src/system/kernel/arch/ppc/paging/PPCPagingStructures.cpp
new file mode 100644
index 0000000..faf089c
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/PPCPagingStructures.cpp
@@ -0,0 +1,20 @@
+/*
+ * Copyright 2010, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Distributed under the terms of the MIT License.
+ */
+
+
+#include "paging/PPCPagingStructures.h"
+
+
+PPCPagingStructures::PPCPagingStructures()
+       :
+       ref_count(1),
+       active_on_cpus(0)
+{
+}
+
+
+PPCPagingStructures::~PPCPagingStructures()
+{
+}
diff --git a/src/system/kernel/arch/ppc/paging/PPCPagingStructures.h b/src/system/kernel/arch/ppc/paging/PPCPagingStructures.h
new file mode 100644
index 0000000..d1e26a3
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/PPCPagingStructures.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright 2010, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Copyright 2005-2009, Axel Dörfler, axeld@xxxxxxxxxxxxxxxx.
+ * Distributed under the terms of the MIT License.
+ *
+ * Copyright 2001-2002, Travis Geiselbrecht. All rights reserved.
+ * Distributed under the terms of the NewOS License.
+ */
+#ifndef KERNEL_ARCH_PPC_PAGING_PPC_PAGING_STRUCTURES_H
+#define KERNEL_ARCH_PPC_PAGING_PPC_PAGING_STRUCTURES_H
+
+
+#include <SupportDefs.h>
+
+#include <heap.h>
+
+
+struct PPCPagingStructures : DeferredDeletable {
+       // X86 stuff, probably useless
+       phys_addr_t                                     pgdir_phys;
+       int32                                           ref_count;
+       int32                                           active_on_cpus;
+               // mask indicating on which CPUs the map is currently used
+
+                                                               PPCPagingStructures();
+       virtual                                         ~PPCPagingStructures();
+
+       inline  void                            AddReference();
+       inline  void                            RemoveReference();
+
+       virtual void                            Delete() = 0;
+};
+
+
+inline void
+PPCPagingStructures::AddReference()
+{
+       atomic_add(&ref_count, 1);
+}
+
+
+inline void
+PPCPagingStructures::RemoveReference()
+{
+       if (atomic_add(&ref_count, -1) == 1)
+               Delete();
+}
+
+
+#endif // KERNEL_ARCH_PPC_PAGING_PPC_PAGING_STRUCTURES_H
diff --git a/src/system/kernel/arch/ppc/paging/PPCVMTranslationMap.cpp b/src/system/kernel/arch/ppc/paging/PPCVMTranslationMap.cpp
new file mode 100644
index 0000000..85c7a05
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/PPCVMTranslationMap.cpp
@@ -0,0 +1,147 @@
+/*
+ * Copyright 2008-2011, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Copyright 2002-2007, Axel Dörfler, axeld@xxxxxxxxxxxxxxxx. All rights reserved.
+ * Distributed under the terms of the MIT License.
+ *
+ * Copyright 2001-2002, Travis Geiselbrecht. All rights reserved.
+ * Distributed under the terms of the NewOS License.
+ */
+
+
+#include "paging/PPCVMTranslationMap.h"
+
+#include <thread.h>
+#include <smp.h>
+
+#include "paging/PPCPagingStructures.h"
+
+
+//#define TRACE_PPC_VM_TRANSLATION_MAP
+#ifdef TRACE_PPC_VM_TRANSLATION_MAP
+#      define TRACE(x...) dprintf(x)
+#else
+#      define TRACE(x...) ;
+#endif
+
+
+PPCVMTranslationMap::PPCVMTranslationMap()
+       :
+       //X86:fPageMapper(NULL),
+       fInvalidPagesCount(0)
+{
+}
+
+
+PPCVMTranslationMap::~PPCVMTranslationMap()
+{
+}
+
+
+status_t
+PPCVMTranslationMap::Init(bool kernel)
+{
+       fIsKernelMap = kernel;
+       return B_OK;
+}
+
+
+/*!    Acquires the map's recursive lock, and resets the invalidate pages counter
+       in case it's the first locking recursion.
+*/
+bool
+PPCVMTranslationMap::Lock()
+{
+       TRACE("%p->PPCVMTranslationMap::Lock()\n", this);
+
+       recursive_lock_lock(&fLock);
+       if (recursive_lock_get_recursion(&fLock) == 1) {
+               // we were the first one to grab the lock
+               TRACE("clearing invalidated page count\n");
+               fInvalidPagesCount = 0;
+       }
+
+       return true;
+}
+
+
+/*!    Unlocks the map, and, if we are actually losing the recursive lock,
+       flush all pending changes of this map (ie. flush TLB caches as
+       needed).
+*/
+void
+PPCVMTranslationMap::Unlock()
+{
+       TRACE("%p->PPCVMTranslationMap::Unlock()\n", this);
+
+       if (recursive_lock_get_recursion(&fLock) == 1) {
+               // we're about to release it for the last time
+               Flush();
+       }
+
+       recursive_lock_unlock(&fLock);
+}
+
+
+addr_t
+PPCVMTranslationMap::MappedSize() const
+{
+       return fMapCount;
+}
+
+
+void
+PPCVMTranslationMap::Flush()
+{
+       if (fInvalidPagesCount <= 0)
+               return;
+
+       Thread* thread = thread_get_current_thread();
+       thread_pin_to_current_cpu(thread);
+
+       if (fInvalidPagesCount > PAGE_INVALIDATE_CACHE_SIZE) {
+               // invalidate all pages
+               TRACE("flush_tmap: %d pages to invalidate, invalidate all\n",
+                       fInvalidPagesCount);
+
+               if (fIsKernelMap) {
+                       arch_cpu_global_TLB_invalidate();
+                       smp_send_broadcast_ici(SMP_MSG_GLOBAL_INVALIDATE_PAGES, 0, 0, 0,
+                               NULL, SMP_MSG_FLAG_SYNC);
+               } else {
+                       cpu_status state = disable_interrupts();
+                       arch_cpu_user_TLB_invalidate();
+                       restore_interrupts(state);
+
+                       int cpu = smp_get_current_cpu();
+                       uint32 cpuMask = PagingStructures()->active_on_cpus
+                               & ~((uint32)1 << cpu);
+                       if (cpuMask != 0) {
+                               smp_send_multicast_ici(cpuMask, SMP_MSG_USER_INVALIDATE_PAGES,
+                                       0, 0, 0, NULL, SMP_MSG_FLAG_SYNC);
+                       }
+               }
+       } else {
+               TRACE("flush_tmap: %d pages to invalidate, invalidate list\n",
+                       fInvalidPagesCount);
+
+               arch_cpu_invalidate_TLB_list(fInvalidPages, fInvalidPagesCount);
+
+               if (fIsKernelMap) {
+                       smp_send_broadcast_ici(SMP_MSG_INVALIDATE_PAGE_LIST,
+                               (addr_t)fInvalidPages, fInvalidPagesCount, 0, NULL,
+                               SMP_MSG_FLAG_SYNC);
+               } else {
+                       int cpu = smp_get_current_cpu();
+                       uint32 cpuMask = PagingStructures()->active_on_cpus
+                               & ~((uint32)1 << cpu);
+                       if (cpuMask != 0) {
+                               smp_send_multicast_ici(cpuMask, SMP_MSG_INVALIDATE_PAGE_LIST,
+                                       (addr_t)fInvalidPages, fInvalidPagesCount, 0, NULL,
+                                       SMP_MSG_FLAG_SYNC);
+                       }
+               }
+       }
+       fInvalidPagesCount = 0;
+
+       thread_unpin_from_current_cpu(thread);
+}
diff --git a/src/system/kernel/arch/ppc/paging/PPCVMTranslationMap.h b/src/system/kernel/arch/ppc/paging/PPCVMTranslationMap.h
new file mode 100644
index 0000000..94b9fdc
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/PPCVMTranslationMap.h
@@ -0,0 +1,60 @@
+/*
+ * Copyright 2010, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Distributed under the terms of the MIT License.
+ */
+#ifndef KERNEL_ARCH_PPC_PPC_VM_TRANSLATION_MAP_H
+#define KERNEL_ARCH_PPC_PPC_VM_TRANSLATION_MAP_H
+
+
+#include <vm/VMTranslationMap.h>
+
+
+#define PAGE_INVALIDATE_CACHE_SIZE 64
+
+
+struct PPCPagingStructures;
+class TranslationMapPhysicalPageMapper;
+
+
+struct PPCVMTranslationMap : VMTranslationMap {
+                                                               PPCVMTranslationMap();
+       virtual                                         ~PPCVMTranslationMap();
+
+                       status_t                        Init(bool kernel);
+
+       virtual bool                            Lock();
+       virtual void                            Unlock();
+
+       virtual addr_t                          MappedSize() const;
+
+       virtual void                            Flush();
+
+       virtual PPCPagingStructures* PagingStructures() const = 0;
+
+       inline  void                            InvalidatePage(addr_t address);
+
+       virtual status_t                        RemapAddressRange(addr_t *_virtualAddress,
+                                                                       size_t size, bool unmap) = 0;
+
+
+       virtual void                            ChangeASID() = 0;
+
+protected:
+                       //X86:TranslationMapPhysicalPageMapper* fPageMapper;
+                       int                                     fInvalidPagesCount;
+                       addr_t                          fInvalidPages[PAGE_INVALIDATE_CACHE_SIZE];
+                       bool                            fIsKernelMap;
+};
+
+
+void
+PPCVMTranslationMap::InvalidatePage(addr_t address)
+{
+       if (fInvalidPagesCount < PAGE_INVALIDATE_CACHE_SIZE)
+               fInvalidPages[fInvalidPagesCount] = address;
+
+       fInvalidPagesCount++;
+}
+
+
+#endif // KERNEL_ARCH_PPC_PPC_VM_TRANSLATION_MAP_H
diff --git a/src/system/kernel/arch/ppc/paging/classic/PPCPagingMethodClassic.cpp b/src/system/kernel/arch/ppc/paging/classic/PPCPagingMethodClassic.cpp
new file mode 100644
index 0000000..95ab0b5
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/classic/PPCPagingMethodClassic.cpp
@@ -0,0 +1,423 @@
+/*
+ * Copyright 2008-2010, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Copyright 2002-2007, Axel Dörfler, axeld@xxxxxxxxxxxxxxxx. All rights reserved.
+ * Distributed under the terms of the MIT License.
+ *
+ * Copyright 2001-2002, Travis Geiselbrecht. All rights reserved.
+ * Distributed under the terms of the NewOS License.
+ */
+
+
+#include "paging/classic/PPCPagingMethodClassic.h"
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <AutoDeleter.h>
+
+#include <arch/cpu.h>
+#include <arch_mmu.h>
+#include <arch_system_info.h>
+#include <boot/kernel_args.h>
+#include <int.h>
+#include <thread.h>
+#include <vm/vm.h>
+#include <vm/VMAddressSpace.h>
+
+#include "paging/classic/PPCPagingStructuresClassic.h"
+#include "paging/classic/PPCVMTranslationMapClassic.h"
+#include "generic_vm_physical_page_mapper.h"
+#include "generic_vm_physical_page_ops.h"
+#include "GenericVMPhysicalPageMapper.h"
+
+
+//#define TRACE_PPC_PAGING_METHOD_CLASSIC
+#ifdef TRACE_PPC_PAGING_METHOD_CLASSIC
+#      define TRACE(x...) dprintf(x)
+#else
+#      define TRACE(x...) ;
+#endif
+
+// 64 MB of iospace
+#define IOSPACE_SIZE (64*1024*1024)
+// We only have small (4 KB) pages. The only reason for choosing greater chunk
+// size is to keep the waste of memory limited, since the generic page mapper
+// allocates structures per physical/virtual chunk.
+// TODO: Implement a page mapper more suitable for small pages!
+#define IOSPACE_CHUNK_SIZE (16 * B_PAGE_SIZE)
+
+static addr_t sIOSpaceBase;
+
+
+static status_t
+map_iospace_chunk(addr_t va, phys_addr_t pa, uint32 flags)
+{
+       pa &= ~(B_PAGE_SIZE - 1); // make sure it's page aligned
+       va &= ~(B_PAGE_SIZE - 1); // make sure it's page aligned
+       if (va < sIOSpaceBase || va >= (sIOSpaceBase + IOSPACE_SIZE))
+               panic("map_iospace_chunk: passed invalid va 0x%lx\n", va);
+
+       // map the pages
+       return ppc_map_address_range(va, pa, IOSPACE_CHUNK_SIZE);
+}
+
+
+// #pragma mark - PPCPagingMethodClassic
+
+
+PPCPagingMethodClassic::PPCPagingMethodClassic()
+/*
+       :
+       fPageHole(NULL),
+       fPageHolePageDir(NULL),
+       fKernelPhysicalPageDirectory(0),
+       fKernelVirtualPageDirectory(NULL),
+       fPhysicalPageMapper(NULL),
+       fKernelPhysicalPageMapper(NULL)
+*/
+{
+}
+
+
+PPCPagingMethodClassic::~PPCPagingMethodClassic()
+{
+}
+
+
+status_t
+PPCPagingMethodClassic::Init(kernel_args* args,
+       VMPhysicalPageMapper** _physicalPageMapper)
+{
+       TRACE("PPCPagingMethodClassic::Init(): entry\n");
+
+       fPageTable = (page_table_entry_group *)args->arch_args.page_table.start;
+       fPageTableSize = args->arch_args.page_table.size;
+       fPageTableHashMask = fPageTableSize / sizeof(page_table_entry_group) - 1;
+
+       // init physical page mapper
+       status_t error = generic_vm_physical_page_mapper_init(args,
+               map_iospace_chunk, &sIOSpaceBase, IOSPACE_SIZE, IOSPACE_CHUNK_SIZE);
+       if (error != B_OK)
+               return error;
+
+       new(&fPhysicalPageMapper) GenericVMPhysicalPageMapper;
+
+       *_physicalPageMapper = &fPhysicalPageMapper;
+       return B_OK;
+
+#if 0//X86
+       fKernelPhysicalPageDirectory = args->arch_args.phys_pgdir;
+       fKernelVirtualPageDirectory = (page_directory_entry*)(addr_t)
+               args->arch_args.vir_pgdir;
+
+#ifdef TRACE_PPC_PAGING_METHOD_CLASSIC
+       TRACE("page hole: %p, page dir: %p\n", fPageHole, fPageHolePageDir);
+       TRACE("page dir: %p (physical: %#" B_PRIx32 ")\n",
+               fKernelVirtualPageDirectory, fKernelPhysicalPageDirectory);
+#endif
+
+       PPCPagingStructuresClassic::StaticInit();
+
+       // create the initial pool for the physical page mapper
+       PhysicalPageSlotPool* pool
+               = new(&PhysicalPageSlotPool::sInitialPhysicalPagePool)
+                       PhysicalPageSlotPool;
+       status_t error = pool->InitInitial(args);
+       if (error != B_OK) {
+               panic("PPCPagingMethodClassic::Init(): Failed to create initial 
pool "
+                       "for physical page mapper!");
+               return error;
+       }
+
+       // create physical page mapper
+       large_memory_physical_page_ops_init(args, pool, fPhysicalPageMapper,
+               fKernelPhysicalPageMapper);
+               // TODO: Select the best page mapper!
+
+       // enable global page feature if available
+       if (x86_check_feature(IA32_FEATURE_PGE, FEATURE_COMMON)) {
+               // this prevents kernel pages from being flushed from TLB on
+               // context-switch
+               x86_write_cr4(x86_read_cr4() | IA32_CR4_GLOBAL_PAGES);
+       }
+
+       TRACE("PPCPagingMethodClassic::Init(): done\n");
+
+       *_physicalPageMapper = fPhysicalPageMapper;
+       return B_OK;
+#endif
+}
+
+
+status_t
+PPCPagingMethodClassic::InitPostArea(kernel_args* args)
+{
+
+       // If the page table doesn't lie within the kernel address space, we
+       // remap it.
+       if (!IS_KERNEL_ADDRESS(fPageTable)) {
+               addr_t newAddress = (addr_t)fPageTable;
+               status_t error = ppc_remap_address_range(&newAddress, fPageTableSize,
+                       false);
+               if (error != B_OK) {
+                       panic("arch_vm_translation_map_init_post_area(): Failed 
to remap "
+                               "the page table!");
+                       return error;
+               }
+
+               // set the new page table address
+               addr_t oldVirtualBase = (addr_t)(fPageTable);
+               fPageTable = (page_table_entry_group*)newAddress;
+
+               // unmap the old pages
+               ppc_unmap_address_range(oldVirtualBase, fPageTableSize);
+
+// TODO: We should probably map the page table via BAT. It is relatively large,
+// and due to being a hash table the access patterns might look sporadic, which
+// certainly isn't to the liking of the TLB.
+       }
+
+       // create an area to cover the page table
+       fPageTableArea = create_area("page_table", (void **)&fPageTable, 
B_EXACT_ADDRESS,
+               fPageTableSize, B_ALREADY_WIRED, B_KERNEL_READ_AREA | 
B_KERNEL_WRITE_AREA);
+
+       // init physical page mapper
+       status_t error = generic_vm_physical_page_mapper_init_post_area(args);
+       if (error != B_OK)
+               return error;
+
+       return B_OK;
+
+#if 0//X86
+       // now that the vm is initialized, create an area that represents
+       // the page hole
+       void *temp;
+       status_t error;
+       area_id area;
+
+       // unmap the page hole hack we were using before
+       fKernelVirtualPageDirectory[1023] = 0;
+       fPageHolePageDir = NULL;
+       fPageHole = NULL;
+
+       temp = (void*)fKernelVirtualPageDirectory;
+       area = create_area("kernel_pgdir", &temp, B_EXACT_ADDRESS, B_PAGE_SIZE,
+               B_ALREADY_WIRED, B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA);
+       if (area < B_OK)
+               return area;
+
+       error = PhysicalPageSlotPool::sInitialPhysicalPagePool
+               .InitInitialPostArea(args);
+       if (error != B_OK)
+               return error;
+
+       return B_OK;
+#endif//X86
+}
+
+
+status_t
+PPCPagingMethodClassic::CreateTranslationMap(bool kernel, VMTranslationMap** _map)
+{
+       PPCVMTranslationMapClassic* map = new(std::nothrow) PPCVMTranslationMapClassic;
+       if (map == NULL)
+               return B_NO_MEMORY;
+
+       status_t error = map->Init(kernel);
+       if (error != B_OK) {
+               delete map;
+               return error;
+       }
+
+       *_map = map;
+       return B_OK;
+}
+
+
+status_t
+PPCPagingMethodClassic::MapEarly(kernel_args* args, addr_t virtualAddress,
+       phys_addr_t physicalAddress, uint8 attributes,
+       page_num_t (*get_free_page)(kernel_args*))
+{
+       uint32 virtualSegmentID = get_sr((void *)virtualAddress) & 0xffffff;
+
+       uint32 hash = page_table_entry::PrimaryHash(virtualSegmentID, (uint32)virtualAddress);
+       page_table_entry_group *group = &fPageTable[hash & fPageTableHashMask];
+
+       for (int32 i = 0; i < 8; i++) {
+               // 8 entries in a group
+               if (group->entry[i].valid)
+                       continue;
+
+               FillPageTableEntry(&group->entry[i], virtualSegmentID,
+                       virtualAddress, physicalAddress, PTE_READ_WRITE, 0, false);
+               return B_OK;
+       }
+
+       hash = page_table_entry::SecondaryHash(hash);
+       group = &fPageTable[hash & fPageTableHashMask];
+
+       for (int32 i = 0; i < 8; i++) {
+               if (group->entry[i].valid)
+                       continue;
+
+               FillPageTableEntry(&group->entry[i], virtualSegmentID,
+                       virtualAddress, physicalAddress, PTE_READ_WRITE, 0, true);
+               return B_OK;
+       }
+
+       return B_ERROR;
+}
+
+
+bool
+PPCPagingMethodClassic::IsKernelPageAccessible(addr_t virtualAddress,
+       uint32 protection)
+{
+       // TODO:factor out to baseclass
+       VMAddressSpace *addressSpace = VMAddressSpace::Kernel();
+
+//XXX:
+//     PPCVMTranslationMap* map = static_cast<PPCVMTranslationMap*>(
+//             addressSpace->TranslationMap());
+//     VMTranslationMap* map = addressSpace->TranslationMap();
+       PPCVMTranslationMapClassic* map = static_cast<PPCVMTranslationMapClassic*>(
+               addressSpace->TranslationMap());
+
+       phys_addr_t physicalAddress;
+       uint32 flags;
+       if (map->Query(virtualAddress, &physicalAddress, &flags) != B_OK)
+               return false;
+
+       if ((flags & PAGE_PRESENT) == 0)
+               return false;
+
+       // present means kernel-readable, so check for writable
+       return (protection & B_KERNEL_WRITE_AREA) == 0
+               || (flags & B_KERNEL_WRITE_AREA) != 0;
+}
+
+
+void
+PPCPagingMethodClassic::FillPageTableEntry(page_table_entry *entry,
+       uint32 virtualSegmentID, addr_t virtualAddress, phys_addr_t physicalAddress,
+       uint8 protection, uint32 memoryType, bool secondaryHash)
+{
+       // lower 32 bit - set at once
+       entry->physical_page_number = physicalAddress / B_PAGE_SIZE;
+       entry->_reserved0 = 0;
+       entry->referenced = false;
+       entry->changed = false;
+       entry->write_through = (memoryType == B_MTR_UC) || (memoryType == B_MTR_WT);
+       entry->caching_inhibited = (memoryType == B_MTR_UC);
+       entry->memory_coherent = false;
+       entry->guarded = false;
+       entry->_reserved1 = 0;
+       entry->page_protection = protection & 0x3;
+       eieio();
+               // we need to make sure that the lower 32 bit were
+               // already written when the entry becomes valid
+
+       // upper 32 bit
+       entry->virtual_segment_id = virtualSegmentID;
+       entry->secondary_hash = secondaryHash;
+       entry->abbr_page_index = (virtualAddress >> 22) & 0x3f;
+       entry->valid = true;
+
+       ppc_sync();
+}
+
+
+#if 0//X86
+/*static*/ void
+PPCPagingMethodClassic::PutPageTableInPageDir(page_directory_entry* entry,
+       phys_addr_t pgtablePhysical, uint32 attributes)
+{
+       *entry = (pgtablePhysical & PPC_PDE_ADDRESS_MASK)
+               | PPC_PDE_PRESENT
+               | PPC_PDE_WRITABLE
+               | PPC_PDE_USER;
+               // TODO: we ignore the attributes of the page table - for compatibility
+               // with BeOS we allow having user accessible areas in the kernel address
+               // space. This is currently being used by some drivers, mainly for the
+               // frame buffer. Our current real time data implementation makes use of
+               // this fact, too.
+               // We might want to get rid of this possibility one day, especially if
+               // we intend to port it to a platform that does not support this.
+}
+
+
+/*static*/ void
+PPCPagingMethodClassic::PutPageTableEntryInTable(page_table_entry* entry,
+       phys_addr_t physicalAddress, uint32 attributes, uint32 memoryType,
+       bool globalPage)
+{
+       page_table_entry page = (physicalAddress & PPC_PTE_ADDRESS_MASK)
+               | PPC_PTE_PRESENT | (globalPage ? PPC_PTE_GLOBAL : 0)
+               | MemoryTypeToPageTableEntryFlags(memoryType);
+
+       // if the page is user accessible, it's automatically
+       // accessible in kernel space, too (but with the same
+       // protection)
+       if ((attributes & B_USER_PROTECTION) != 0) {
+               page |= PPC_PTE_USER;
+               if ((attributes & B_WRITE_AREA) != 0)
+                       page |= PPC_PTE_WRITABLE;
+       } else if ((attributes & B_KERNEL_WRITE_AREA) != 0)
+               page |= PPC_PTE_WRITABLE;
+
+       // put it in the page table
+       *(volatile page_table_entry*)entry = page;
+}
+
+
+/*static*/ void
+PPCPagingMethodClassic::_EarlyPreparePageTables(page_table_entry* pageTables,
+       addr_t address, size_t size)
+{
+       memset(pageTables, 0, B_PAGE_SIZE * (size / (B_PAGE_SIZE * 1024)));
+
+       // put the array of pgtables directly into the kernel pagedir
+       // these will be wired and kept mapped into virtual space to be easy to
+       // get to
+       {
+               addr_t virtualTable = (addr_t)pageTables;
+
+               page_directory_entry* pageHolePageDir
+                       = PPCPagingMethodClassic::Method()->PageHolePageDir();
+
+               for (size_t i = 0; i < (size / (B_PAGE_SIZE * 1024));
+                               i++, virtualTable += B_PAGE_SIZE) {
+                       phys_addr_t physicalTable = 0;
+                       _EarlyQuery(virtualTable, &physicalTable);
+                       page_directory_entry* entry = &pageHolePageDir[
+                               (address / (B_PAGE_SIZE * 1024)) + i];
+                       PutPageTableInPageDir(entry, physicalTable,
+                               B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA);
+               }
+       }
+}
+
+
+//! TODO: currently assumes this translation map is active
+/*static*/ status_t
+PPCPagingMethodClassic::_EarlyQuery(addr_t virtualAddress,
+       phys_addr_t *_physicalAddress)
+{
+       PPCPagingMethodClassic* method = PPCPagingMethodClassic::Method();
+       int index = VADDR_TO_PDENT(virtualAddress);
+       if ((method->PageHolePageDir()[index] & PPC_PDE_PRESENT) == 0) {
+               // no pagetable here
+               return B_ERROR;
+       }
+
+       page_table_entry* entry = method->PageHole() + virtualAddress / B_PAGE_SIZE;
+       if ((*entry & PPC_PTE_PRESENT) == 0) {
+               // page mapping not valid
+               return B_ERROR;
+       }
+
+       *_physicalAddress = *entry & PPC_PTE_ADDRESS_MASK;
+       return B_OK;
+}
+#endif
diff --git a/src/system/kernel/arch/ppc/paging/classic/PPCPagingMethodClassic.h b/src/system/kernel/arch/ppc/paging/classic/PPCPagingMethodClassic.h
new file mode 100644
index 0000000..5f57b8b
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/classic/PPCPagingMethodClassic.h
@@ -0,0 +1,208 @@
+/*
+ * Copyright 2010, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Distributed under the terms of the MIT License.
+ */
+#ifndef KERNEL_ARCH_PPC_PAGING_CLASSIC_PPC_PAGING_METHOD_CLASSIC_H
+#define KERNEL_ARCH_PPC_PAGING_CLASSIC_PPC_PAGING_METHOD_CLASSIC_H
+
+
+#include <arch_mmu.h>
+//#include "paging/classic/paging.h"
+#include "paging/PPCPagingMethod.h"
+#include "paging/PPCPagingStructures.h"
+#include "GenericVMPhysicalPageMapper.h"
+
+
+class TranslationMapPhysicalPageMapper;
+
+
+class PPCPagingMethodClassic : public PPCPagingMethod {
+public:
+                                                               PPCPagingMethodClassic();
+       virtual                                         ~PPCPagingMethodClassic();
+
+       virtual status_t                        Init(kernel_args* args,
+                                                                       VMPhysicalPageMapper** _physicalPageMapper);
+       virtual status_t                        InitPostArea(kernel_args* args);
+
+       virtual status_t                        CreateTranslationMap(bool kernel,
+                                                                       VMTranslationMap** _map);
+
+       virtual status_t                        MapEarly(kernel_args* args,
+                                                                       addr_t virtualAddress,
+                                                                       phys_addr_t physicalAddress,
+                                                                       uint8 attributes,
+                                                                       page_num_t (*get_free_page)(kernel_args*));
+
+       virtual bool                            IsKernelPageAccessible(addr_t virtualAddress,
+                                                                       uint32 protection);
+#if 0//X86
+       inline  page_table_entry*       PageHole() const
+                                                                       { return fPageHole; }
+       inline  page_directory_entry* PageHolePageDir() const
+                                                                       { return fPageHolePageDir; }
+       inline  uint32                          KernelPhysicalPageDirectory() const
+                                                                       { return fKernelPhysicalPageDirectory; }
+       inline  page_directory_entry* KernelVirtualPageDirectory() const
+                                                                       { return fKernelVirtualPageDirectory; }
+       inline  PPCPhysicalPageMapper* PhysicalPageMapper() const
+                                                                       { return fPhysicalPageMapper; }
+       inline  TranslationMapPhysicalPageMapper* KernelPhysicalPageMapper() const
+                                                                       { return fKernelPhysicalPageMapper; }
+#endif
+
+       inline  page_table_entry_group* PageTable() const
+                                                                       { return fPageTable; }
+       inline  size_t                          PageTableSize() const
+                                                                       { return fPageTableSize; }
+       inline  uint32                          PageTableHashMask() const
+                                                                       { return fPageTableHashMask; }
+
+       static  PPCPagingMethodClassic* Method();
+
+       void                                            FillPageTableEntry(page_table_entry *entry,
+                                                                       uint32 virtualSegmentID,
+                                                                       addr_t virtualAddress,
+                                                                       phys_addr_t physicalAddress,
+                                                                       uint8 protection, uint32 memoryType,
+                                                                       bool secondaryHash);
+
+
+#if 0//X86
+       static  void                            PutPageTableInPageDir(
+                                                                       page_directory_entry* entry,
+                                                                       phys_addr_t pgtablePhysical,
+                                                                       uint32 attributes);
+       static  void                            PutPageTableEntryInTable(
+                                                                       page_table_entry* entry,
+                                                                       phys_addr_t physicalAddress,
+                                                                       uint32 attributes, uint32 memoryType,
+                                                                       bool globalPage);
+       static  page_table_entry        SetPageTableEntry(page_table_entry* entry,
+                                                                       page_table_entry newEntry);
+       static  page_table_entry        SetPageTableEntryFlags(page_table_entry* entry,
+                                                                       uint32 flags);
+       static  page_table_entry        TestAndSetPageTableEntry(
+                                                                       page_table_entry* entry,
+                                                                       page_table_entry newEntry,
+                                                                       page_table_entry oldEntry);
+       static  page_table_entry        ClearPageTableEntry(page_table_entry* entry);
+       static  page_table_entry        ClearPageTableEntryFlags(
+                                                                       page_table_entry* entry, uint32 flags);
+
+       static  uint32                          MemoryTypeToPageTableEntryFlags(
+                                                                       uint32 memoryType);
+#endif
+
+private:
+                       //XXX:x86
+                       struct PhysicalPageSlotPool;
+                       friend struct PhysicalPageSlotPool;
+
+private:
+#if 0//X86
+       static  void                            _EarlyPreparePageTables(
+                                                                       page_table_entry* pageTables,
+                                                                       addr_t address, size_t size);
+       static  status_t                        _EarlyQuery(addr_t virtualAddress,
+                                                                       phys_addr_t *_physicalAddress);
+#endif
+
+private:
+                       struct page_table_entry_group *fPageTable;
+                       size_t                          fPageTableSize;
+                       uint32                          fPageTableHashMask;
+                       area_id                         fPageTableArea;
+
+                       GenericVMPhysicalPageMapper     fPhysicalPageMapper;
+
+
+#if 0                  //XXX:x86
+                       page_table_entry*       fPageHole;
+                       page_directory_entry* fPageHolePageDir;
+                       uint32                          fKernelPhysicalPageDirectory;
+                       page_directory_entry* fKernelVirtualPageDirectory;
+                       PPCPhysicalPageMapper* fPhysicalPageMapper;
+                       TranslationMapPhysicalPageMapper* fKernelPhysicalPageMapper;
+#endif
+};
+
+
+/*static*/ inline PPCPagingMethodClassic*
+PPCPagingMethodClassic::Method()
+{
+       return static_cast<PPCPagingMethodClassic*>(gPPCPagingMethod);
+}
+
+
+#if 0//X86
+/*static*/ inline page_table_entry
+PPCPagingMethodClassic::SetPageTableEntry(page_table_entry* entry,
+       page_table_entry newEntry)
+{
+       return atomic_set((int32*)entry, newEntry);
+}
+
+
+/*static*/ inline page_table_entry
+PPCPagingMethodClassic::SetPageTableEntryFlags(page_table_entry* entry,
+       uint32 flags)
+{
+       return atomic_or((int32*)entry, flags);
+}
+
+
+/*static*/ inline page_table_entry
+PPCPagingMethodClassic::TestAndSetPageTableEntry(page_table_entry* entry,
+       page_table_entry newEntry, page_table_entry oldEntry)
+{
+       return atomic_test_and_set((int32*)entry, newEntry, oldEntry);
+}
+
+
+/*static*/ inline page_table_entry
+PPCPagingMethodClassic::ClearPageTableEntry(page_table_entry* entry)
+{
+       return SetPageTableEntry(entry, 0);
+}
+
+
+/*static*/ inline page_table_entry
+PPCPagingMethodClassic::ClearPageTableEntryFlags(page_table_entry* entry,
+       uint32 flags)
+{
+       return atomic_and((int32*)entry, ~flags);
+}
+
+
+/*static*/ inline uint32
+PPCPagingMethodClassic::MemoryTypeToPageTableEntryFlags(uint32 memoryType)
+{
+       // ATM we only handle the uncacheable and write-through type explicitly. For
+       // all other types we rely on the MTRRs to be set up correctly. Since we set
+       // the default memory type to write-back and since the uncacheable type in
+       // the PTE overrides any MTRR attribute (though, as per the specs, that is
+       // not recommended for performance reasons), this reduces the work we
+       // actually *have* to do with the MTRRs to setting the remaining types
+       // (usually only write-combining for the frame buffer).
+       switch (memoryType) {
+               case B_MTR_UC:
+                       return PPC_PTE_CACHING_DISABLED | PPC_PTE_WRITE_THROUGH;
+
+               case B_MTR_WC:
+                       // PPC_PTE_WRITE_THROUGH would be closer, but the combination with
+                       // MTRR WC is "implementation defined" for Pentium Pro/II.
+                       return 0;
+
+               case B_MTR_WT:
+                       return PPC_PTE_WRITE_THROUGH;
+
+               case B_MTR_WP:
+               case B_MTR_WB:
+               default:
+                       return 0;
+       }
+}
+#endif//X86
+
+
+#endif // KERNEL_ARCH_PPC_PAGING_CLASSIC_PPC_PAGING_METHOD_CLASSIC_H
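
PageTableHashMask() ties the accessors above together: the classic hash table
holds a power-of-two number of 64 byte PTE groups, and the mask is that group
count minus one. A sketch of the indexing step both hash passes share
(assuming exactly that size/mask relationship; the helper is hypothetical):

        // Sketch: turn a primary or secondary hash into its PTE group,
        // assuming PageTableHashMask()
        //      == PageTableSize() / sizeof(page_table_entry_group) - 1.
        static page_table_entry_group*
        group_for_hash(PPCPagingMethodClassic* method, uint32 hash)
        {
                return &method->PageTable()[hash & method->PageTableHashMask()];
        }
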
diff --git a/src/system/kernel/arch/ppc/paging/classic/PPCPagingStructuresClassic.cpp b/src/system/kernel/arch/ppc/paging/classic/PPCPagingStructuresClassic.cpp
new file mode 100644
index 0000000..79a35ba
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/classic/PPCPagingStructuresClassic.cpp
@@ -0,0 +1,141 @@
+/*
+ * Copyright 2008-2010, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Copyright 2002-2007, Axel Dörfler, axeld@xxxxxxxxxxxxxxxx. All rights reserved.
+ * Distributed under the terms of the MIT License.
+ *
+ * Copyright 2001-2002, Travis Geiselbrecht. All rights reserved.
+ * Distributed under the terms of the NewOS License.
+ */
+
+
+#include "paging/classic/PPCPagingStructuresClassic.h"
+
+#include <stdlib.h>
+
+#include <heap.h>
+#include <util/AutoLock.h>
+
+
+// Accessor class to reuse the SinglyLinkedListLink of DeferredDeletable for
+// PPCPagingStructuresClassic.
+struct PagingStructuresGetLink {
+private:
+       typedef SinglyLinkedListLink<PPCPagingStructuresClassic> Link;
+
+public:
+       inline Link* operator()(PPCPagingStructuresClassic* element) const
+       {
+               return (Link*)element->GetSinglyLinkedListLink();
+       }
+
+       inline const Link* operator()(
+               const PPCPagingStructuresClassic* element) const
+       {
+               return (const Link*)element->GetSinglyLinkedListLink();
+       }
+};
+
+
+typedef SinglyLinkedList<PPCPagingStructuresClassic, PagingStructuresGetLink>
+       PagingStructuresList;
+
+
+static PagingStructuresList sPagingStructuresList;
+static spinlock sPagingStructuresListLock;
+
+
+PPCPagingStructuresClassic::PPCPagingStructuresClassic()
+/*     :
+       pgdir_virt(NULL)*/
+{
+}
+
+
+PPCPagingStructuresClassic::~PPCPagingStructuresClassic()
+{
+#if 0//X86
+       // free the page dir
+       free(pgdir_virt);
+#endif
+}
+
+
+void
+PPCPagingStructuresClassic::Init(/*page_directory_entry* virtualPageDir,
+       phys_addr_t physicalPageDir, page_directory_entry* kernelPageDir*/
+       page_table_entry_group *pageTable)
+{
+//     pgdir_virt = virtualPageDir;
+//     pgdir_phys = physicalPageDir;
+
+#if 0//X86
+       // zero out the bottom portion of the new pgdir
+       memset(pgdir_virt + FIRST_USER_PGDIR_ENT, 0,
+               NUM_USER_PGDIR_ENTS * sizeof(page_directory_entry));
+#endif
+
+       // insert this new map into the map list
+       {
+               int state = disable_interrupts();
+               acquire_spinlock(&sPagingStructuresListLock);
+
+#if 0//X86
+               // copy the top portion of the page dir from the kernel page dir
+               if (kernelPageDir != NULL) {
+                       memcpy(pgdir_virt + FIRST_KERNEL_PGDIR_ENT,
+                               kernelPageDir + FIRST_KERNEL_PGDIR_ENT,
+                               NUM_KERNEL_PGDIR_ENTS * sizeof(page_directory_entry));
+               }
+#endif
+
+               sPagingStructuresList.Add(this);
+
+               release_spinlock(&sPagingStructuresListLock);
+               restore_interrupts(state);
+       }
+}
+
+
+void
+PPCPagingStructuresClassic::Delete()
+{
+       // remove from global list
+       InterruptsSpinLocker locker(sPagingStructuresListLock);
+       sPagingStructuresList.Remove(this);
+       locker.Unlock();
+
+#if 0
+       // this sanity check can be enabled when corruption due to
+       // overwriting an active page directory is suspected
+       uint32 activePageDirectory = x86_read_cr3();
+       if (activePageDirectory == pgdir_phys)
+               panic("deleting a still active page directory\n");
+#endif
+
+       if (are_interrupts_enabled())
+               delete this;
+       else
+               deferred_delete(this);
+}
+
+
+/*static*/ void
+PPCPagingStructuresClassic::StaticInit()
+{
+       B_INITIALIZE_SPINLOCK(&sPagingStructuresListLock);
+       new (&sPagingStructuresList) PagingStructuresList;
+}
+
+
+/*static*/ void
+PPCPagingStructuresClassic::UpdateAllPageDirs(int index,
+       page_table_entry_group entry)
+//XXX:page_table_entry?
+{
+       InterruptsSpinLocker locker(sPagingStructuresListLock);
+#if 0//X86
+       PagingStructuresList::Iterator it = sPagingStructuresList.GetIterator();
+       while (PPCPagingStructuresClassic* info = it.Next())
+               info->pgdir_virt[index] = entry;
+#endif
+}
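
Delete() above shows the idiom for tearing down paging structures: they may
be released while interrupts are disabled, where freeing memory is unsafe, so
destruction is deferred. The same pattern in isolation (sketch;
are_interrupts_enabled() and deferred_delete() are the kernel's own, the
wrapper is hypothetical):

        // Sketch: release a DeferredDeletable object from any context.
        static void
        release_structures(PPCPagingStructuresClassic* structures)
        {
                if (are_interrupts_enabled())
                        delete structures;              // safe to free right away
                else
                        deferred_delete(structures);    // freed later in a safe context
        }
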
diff --git a/src/system/kernel/arch/ppc/paging/classic/PPCPagingStructuresClassic.h b/src/system/kernel/arch/ppc/paging/classic/PPCPagingStructuresClassic.h
new file mode 100644
index 0000000..71d5d9e
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/classic/PPCPagingStructuresClassic.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright 2010, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Distributed under the terms of the MIT License.
+ */
+#ifndef KERNEL_ARCH_PPC_PAGING_CLASSIC_PPC_PAGING_STRUCTURES_CLASSIC_H
+#define KERNEL_ARCH_PPC_PAGING_CLASSIC_PPC_PAGING_STRUCTURES_CLASSIC_H
+
+
+//#include "paging/classic/paging.h"
+#include <arch_mmu.h>
+#include "paging/PPCPagingStructures.h"
+
+
+struct PPCPagingStructuresClassic : PPCPagingStructures {
+//     page_directory_entry*           pgdir_virt;
+
+                                                               PPCPagingStructuresClassic();
+       virtual                                         ~PPCPagingStructuresClassic();
+
+                       void                            Init(/*page_directory_entry* virtualPageDir,
+                                                                        phys_addr_t physicalPageDir,
+                                                                        page_directory_entry* kernelPageDir,*/
+                                                                       page_table_entry_group *pageTable);
+
+       virtual void                            Delete();
+
+       static  void                            StaticInit();
+       static  void                            UpdateAllPageDirs(int index,
+                                                                       page_table_entry_group entry);
+};
+
+
+#endif // KERNEL_ARCH_PPC_PAGING_CLASSIC_PPC_PAGING_STRUCTURES_CLASSIC_H
diff --git a/src/system/kernel/arch/ppc/paging/classic/PPCVMTranslationMapClassic.cpp b/src/system/kernel/arch/ppc/paging/classic/PPCVMTranslationMapClassic.cpp
new file mode 100644
index 0000000..253bbc4
--- /dev/null
+++ b/src/system/kernel/arch/ppc/paging/classic/PPCVMTranslationMapClassic.cpp
@@ -0,0 +1,1363 @@
+/*
+ * Copyright 2008-2011, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Copyright 2002-2007, Axel Dörfler, axeld@xxxxxxxxxxxxxxxx. All rights reserved.
+ * Distributed under the terms of the MIT License.
+ *
+ * Copyright 2001-2002, Travis Geiselbrecht. All rights reserved.
+ * Distributed under the terms of the NewOS License.
+ */
+
+/*     (bonefish) Some explanatory words on how address translation is implemented
+       for the 32 bit PPC architecture.
+
+       I use the address type nomenclature as used in the PPC architecture
+       specs, i.e.
+       - effective address: An address as used by program instructions, i.e.
+         that's what elsewhere (e.g. in the VM implementation) is called
+         virtual address.
+       - virtual address: An intermediate address computed from the effective
+         address via the segment registers.
+       - physical address: An address referring to physical storage.
+
+       The hardware translates an effective address to a physical address using
+       either of two mechanisms: 1) Block Address Translation (BAT) or
+       2) segment + page translation. The first mechanism does this directly
+       using two sets (for data/instructions) of special purpose registers.
+       The latter mechanism is of more relevance here, though:
+
+       effective address (32 bit):  [ 0 ESID  3 | 4  PIX 19 | 20 Byte 31 ]
+                                         |            |           |
+                                 (segment registers)  |           |
+                                         |            |           |
+       virtual address (52 bit):   [ 0      VSID 23 | 24 PIX 39 | 40 Byte 51 ]
+                                   [ 0             VPN       39 | 40 Byte 51 ]
+                                                |                     |
+                                          (page table)                |
+                                                |                     |
+       physical address (32 bit):  [ 0        PPN       19 | 20 Byte 31 ]
+
+
+       ESID: Effective Segment ID
+       VSID: Virtual Segment ID
+       PIX:  Page Index
+       VPN:  Virtual Page Number
+       PPN:  Physical Page Number
+
+
+       Unlike on x86 we can't just switch the context to another team by just
+       setting a register to another page directory, since we only have one
+       page table containing both kernel and user address mappings. Instead we
+       map the effective address space of kernel and *all* teams
+       non-intersectingly into the virtual address space (which fortunately is
+       20 bits wider), and use the segment registers to select the section of
+       the virtual address space for the current team. Half of the 16 segment
+       registers (8 - 15) map the kernel addresses, so they remain unchanged.
+
+       The range of the virtual address space a team's effective address space
+       is mapped to is defined by its PPCVMTranslationMap::fVSIDBase,
+       which is the first of the 8 successive VSID values used for the team.
+
+       Which fVSIDBase values are already taken is defined by the set bits in
+       the bitmap sVSIDBaseBitmap.
+
+
+       TODO:
+       * If we want to continue to use the OF services, we would need to add
+         its address mappings to the kernel space. Unfortunately some stuff
+         (especially RAM) is mapped in an address range without the kernel
+         address space. We probably need to map those into each team's address
+         space as kernel read/write areas.
+       * The current locking scheme is insufficient. The page table is a resource
+         shared by all teams. We need to synchronize access to it. Probably via a
+         spinlock.
+ */
+
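
To make the nomenclature above concrete, here is a sketch of how a 32 bit
effective address splits up under segment + page translation (illustrative
only; the helper and its output are not part of the commit):

        // Sketch: decompose an effective address. With the scheme described
        // above, the VSID for segment n of a team is simply fVSIDBase + n.
        static void
        dump_effective_address(uint32 effectiveAddress, uint32 vsidBase)
        {
                uint32 esid = effectiveAddress >> 28;                   // bits 0-3
                uint32 pageIndex = (effectiveAddress >> 12) & 0xffff;   // bits 4-19
                uint32 byteOffset = effectiveAddress & 0xfff;           // bits 20-31
                uint32 vsid = vsidBase + esid;  // what the segment register yields

                dprintf("ea %#lx: vsid %#lx, page index %#lx, offset %#lx\n",
                        (unsigned long)effectiveAddress, (unsigned long)vsid,
                        (unsigned long)pageIndex, (unsigned long)byteOffset);
        }
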
+#include "paging/classic/PPCVMTranslationMapClassic.h"
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <arch/cpu.h>
+#include <arch_mmu.h>
+#include <int.h>
+#include <thread.h>
+#include <slab/Slab.h>
+#include <smp.h>
+#include <util/AutoLock.h>
+#include <util/queue.h>
+#include <vm/vm_page.h>
+#include <vm/vm_priv.h>
+#include <vm/VMAddressSpace.h>
+#include <vm/VMCache.h>
+
+#include "paging/classic/PPCPagingMethodClassic.h"
+#include "paging/classic/PPCPagingStructuresClassic.h"
+#include "generic_vm_physical_page_mapper.h"
+#include "generic_vm_physical_page_ops.h"
+#include "GenericVMPhysicalPageMapper.h"
+
+
+//#define TRACE_PPC_VM_TRANSLATION_MAP_CLASSIC
+#ifdef TRACE_PPC_VM_TRANSLATION_MAP_CLASSIC
+#      define TRACE(x...) dprintf(x)
+#else
+#      define TRACE(x...) ;
+#endif
+
+
+// The VSID is a 24 bit number. The lower three bits are defined by the
+// (effective) segment number, which leaves us with a 21 bit space of
+// VSID bases (= 2 * 1024 * 1024). Of those we only manage the first
+// PAGE_SIZE * 8, so that the allocation bitmap below fits into a single page.
+#define MAX_VSID_BASES (PAGE_SIZE * 8)
+static uint32 sVSIDBaseBitmap[MAX_VSID_BASES / (sizeof(uint32) * 8)];
+static spinlock sVSIDBaseBitmapLock;
+
+#define VSID_BASE_SHIFT 3
+#define VADDR_TO_VSID(vsidBase, vaddr) (vsidBase + ((vaddr) >> 28))
+
+
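
A worked example of VADDR_TO_VSID() (values hypothetical): a team assigned
fVSIDBase == 0x40 covers effective segment 3, i.e. addresses
0x30000000-0x3fffffff, with VSID 0x43:

        uint32 vsid = VADDR_TO_VSID(0x40, (addr_t)0x30001000);
                // == 0x40 + (0x30001000 >> 28) == 0x43
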
+// #pragma mark -
+
+
+PPCVMTranslationMapClassic::PPCVMTranslationMapClassic()
+       :
+       fPagingStructures(NULL)
+{
+}
+
+
+PPCVMTranslationMapClassic::~PPCVMTranslationMapClassic()
+{
+       if (fPagingStructures == NULL)
+               return;
+
+#if 0//X86
+       if (fPageMapper != NULL)
+               fPageMapper->Delete();
+#endif
+
+       if (fMapCount > 0) {
+               panic("vm_translation_map.destroy_tmap: map %p has positive map "
+                       "count %ld\n", this, fMapCount);
+       }
+
+       // mark the vsid base not in use
+       int baseBit = fVSIDBase >> VSID_BASE_SHIFT;
+       atomic_and((int32 *)&sVSIDBaseBitmap[baseBit / 32],
+                       ~(1 << (baseBit % 32)));
+
+#if 0//X86
+       if (fPagingStructures->pgdir_virt != NULL) {
+               // cycle through and free all of the user space pgtables
+               for (uint32 i = VADDR_TO_PDENT(USER_BASE);
+                               i <= VADDR_TO_PDENT(USER_BASE + (USER_SIZE - 1)); i++) {
+                       if ((fPagingStructures->pgdir_virt[i] & PPC_PDE_PRESENT) != 0) {
+                               addr_t address = fPagingStructures->pgdir_virt[i]
+                                       & PPC_PDE_ADDRESS_MASK;
+                               vm_page* page = vm_lookup_page(address / B_PAGE_SIZE);
+                               if (!page)
+                                       panic("destroy_tmap: didn't find pgtable page\n");
+                               DEBUG_PAGE_ACCESS_START(page);
+                               vm_page_set_state(page, PAGE_STATE_FREE);
+                       }
+               }
+       }
+#endif
+
+       fPagingStructures->RemoveReference();
+}
+
+
+status_t
+PPCVMTranslationMapClassic::Init(bool kernel)
+{
+       TRACE("PPCVMTranslationMapClassic::Init()\n");
+
+       PPCVMTranslationMap::Init(kernel);
+
+       cpu_status state = disable_interrupts();
+       acquire_spinlock(&sVSIDBaseBitmapLock);
+
+       // allocate a VSID base for this one
+       if (kernel) {
+               // The boot loader has set up the segment registers for identical
+               // mapping. Two VSID bases are reserved for the kernel: 0 and 8. The
+               // latter one for mapping the kernel address space (0x80000000...), the
+               // former one for the lower addresses required by the Open Firmware
+               // services.
+               fVSIDBase = 0;
+               sVSIDBaseBitmap[0] |= 0x3;
+       } else {
+               int i = 0;
+
+               while (i < MAX_VSID_BASES) {
+                       if (sVSIDBaseBitmap[i / 32] == 0xffffffff) {
+                               i += 32;
+                               continue;
+                       }
+                       if ((sVSIDBaseBitmap[i / 32] & (1 << (i % 32))) == 0) {
+                               // we found it
+                               sVSIDBaseBitmap[i / 32] |= 1 << (i % 32);
+                               break;
+                       }
+                       i++;
+               }
+               if (i >= MAX_VSID_BASES)
+                       panic("vm_translation_map_create: out of VSID bases\n");
+               fVSIDBase = i << VSID_BASE_SHIFT;
+       }
+
+       release_spinlock(&sVSIDBaseBitmapLock);
+       restore_interrupts(state);
+
+       fPagingStructures = new(std::nothrow) PPCPagingStructuresClassic;
+       if (fPagingStructures == NULL)
+               return B_NO_MEMORY;
+
+       PPCPagingMethodClassic* method = PPCPagingMethodClassic::Method();
+
+       if (!kernel) {
+               // user
+#if 0//X86
+               // allocate a physical page mapper
+               status_t error = method->PhysicalPageMapper()
+                       ->CreateTranslationMapPhysicalPageMapper(&fPageMapper);
+               if (error != B_OK)
+                       return error;
+#endif
+#if 0//X86
+               // allocate the page directory
+               page_directory_entry* virtualPageDir = (page_directory_entry*)memalign(
+                       B_PAGE_SIZE, B_PAGE_SIZE);
+               if (virtualPageDir == NULL)
+                       return B_NO_MEMORY;
+
+               // look up the page directory's physical address
+               phys_addr_t physicalPageDir;
+               vm_get_page_mapping(VMAddressSpace::KernelID(),
+                       (addr_t)virtualPageDir, &physicalPageDir);
+#endif
+
+               fPagingStructures->Init(/*NULL, 0,
+                       method->KernelVirtualPageDirectory()*/method->PageTable());
+       } else {
+               // kernel
+#if 0//X86
+               // get the physical page mapper
+               fPageMapper = method->KernelPhysicalPageMapper();
+#endif
+
+               // we already know the kernel pgdir mapping
+               fPagingStructures->Init(/*method->KernelVirtualPageDirectory(),
+                       method->KernelPhysicalPageDirectory(), NULL*/method->PageTable());
+       }
+
+       return B_OK;
+}
+
+
+void
+PPCVMTranslationMapClassic::ChangeASID()
+{
+// this code depends on the kernel being at 0x80000000, fix if we change that
+#if KERNEL_BASE != 0x80000000
+#error fix me
+#endif
+       int vsidBase = VSIDBase();
+
+       isync();        // synchronize context
+       asm("mtsr       0,%0" : : "g"(vsidBase));
+       asm("mtsr       1,%0" : : "g"(vsidBase + 1));
+       asm("mtsr       2,%0" : : "g"(vsidBase + 2));
+       asm("mtsr       3,%0" : : "g"(vsidBase + 3));
+       asm("mtsr       4,%0" : : "g"(vsidBase + 4));
+       asm("mtsr       5,%0" : : "g"(vsidBase + 5));
+       asm("mtsr       6,%0" : : "g"(vsidBase + 6));
+       asm("mtsr       7,%0" : : "g"(vsidBase + 7));
+       isync();        // synchronize context
+}
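
Conceptually ChangeASID() is a loop over the eight user segment registers,
but mtsr encodes the segment number as an immediate, which is why the real
code is unrolled. With mtsrin, which takes the segment number from the top
four bits of a register operand, the loop form would look like this
(untested sketch, not part of the commit):

        // Sketch: set segment registers 0-7 via mtsrin instead of eight
        // unrolled mtsr instructions.
        static void
        set_user_segment_registers(uint32 vsidBase)
        {
                isync();        // synchronize context
                for (uint32 segment = 0; segment < 8; segment++) {
                        uint32 effectiveAddress = segment << 28;
                        asm volatile("mtsrin %0,%1"
                                : : "r"(vsidBase + segment), "r"(effectiveAddress));
                }
                isync();        // synchronize context
        }
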
+
+
+page_table_entry *
+PPCVMTranslationMapClassic::LookupPageTableEntry(addr_t virtualAddress)
+{
+       // lookup the vsid based off the va
+       uint32 virtualSegmentID = VADDR_TO_VSID(fVSIDBase, virtualAddress);
+
+//     dprintf("vm_translation_map.lookup_page_table_entry: vsid %ld, va 0x%lx\n",
+//             virtualSegmentID, virtualAddress);
+
+       PPCPagingMethodClassic* m = PPCPagingMethodClassic::Method();
+
+       // Search for the page table entry using the primary hash value
+
+       uint32 hash = page_table_entry::PrimaryHash(virtualSegmentID,
+               virtualAddress);
+       page_table_entry_group *group
+               = &(m->PageTable())[hash & m->PageTableHashMask()];
+
+       for (int i = 0; i < 8; i++) {
+               page_table_entry *entry = &group->entry[i];
+
+               if (entry->virtual_segment_id == virtualSegmentID
+                       && entry->secondary_hash == false
+                       && entry->abbr_page_index == ((virtualAddress >> 22) & 0x3f))
+                       return entry;
+       }
+
+       // didn't find it, try the secondary hash value
+
+       hash = page_table_entry::SecondaryHash(hash);
+       group = &(m->PageTable())[hash & m->PageTableHashMask()];
+
+       for (int i = 0; i < 8; i++) {
+               page_table_entry *entry = &group->entry[i];
+
+               if (entry->virtual_segment_id == virtualSegmentID
+                       && entry->secondary_hash == true
+                       && entry->abbr_page_index == ((virtualAddress >> 22) & 0x3f))
+                       return entry;
+       }
+
+       return NULL;
+}
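
The lookup above is the workhorse for the rest of this file; a sketch of how
a Query()-style caller would consume it (hypothetical wrapper, not the
commit's actual Query() implementation):

        // Sketch: translate an effective address back to its physical one.
        static status_t
        query_physical_address(PPCVMTranslationMapClassic* map,
                addr_t virtualAddress, phys_addr_t* _physicalAddress)
        {
                page_table_entry* entry = map->LookupPageTableEntry(virtualAddress);
                if (entry == NULL)
                        return B_ENTRY_NOT_FOUND;

                // reassemble: the 20 bit PPN plus the untranslated low 12 bits
                *_physicalAddress
                        = (phys_addr_t)entry->physical_page_number * B_PAGE_SIZE
                                + (virtualAddress % B_PAGE_SIZE);
                return B_OK;
        }
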
+
+
+bool
+PPCVMTranslationMapClassic::RemovePageTableEntry(addr_t virtualAddress)
+{
+       page_table_entry *entry = LookupPageTableEntry(virtualAddress);
+       if (entry == NULL)
+               return false;
+
+       entry->valid = 0;
+       ppc_sync();
+       tlbie(virtualAddress);
+       eieio();
+       tlbsync();
+       ppc_sync();
+
+       return true;
+}
+
+
+size_t
+PPCVMTranslationMapClassic::MaxPagesNeededToMap(addr_t start, addr_t end) const
+{
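+       // The classic hash table is a single area allocated at init time;
+       // mapping a range never needs extra per-mapping page table pages.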
+       return 0;
+}
+
+
+status_t
+PPCVMTranslationMapClassic::Map(addr_t virtualAddress,
+       phys_addr_t physicalAddress, uint32 attributes,
+       uint32 memoryType, vm_page_reservation* reservation)
+{
+       TRACE("map_tmap: entry pa 0x%lx va 0x%lx\n", physicalAddress,
+               virtualAddress);
+
+       // lookup the vsid based off the va
+       uint32 virtualSegmentID = VADDR_TO_VSID(fVSIDBase, virtualAddress);
+       uint32 protection = 0;
+
+       // ToDo: check this
+       // all kernel mappings are R/W to supervisor code
+       if (attributes & (B_READ_AREA | B_WRITE_AREA))
+               protection = (attributes & B_WRITE_AREA)
+                       ? PTE_READ_WRITE : PTE_READ_ONLY;
+
+       //dprintf("vm_translation_map.map_tmap: vsid %d, pa 0x%lx, va 0x%lx\n",
+       //      vsid, pa, va);
+
+       PPCPagingMethodClassic* m = PPCPagingMethodClassic::Method();
+
+       // Search for a free page table slot using the primary hash value
+       uint32 hash = page_table_entry::PrimaryHash(virtualSegmentID,
+               virtualAddress);
+       page_table_entry_group *group
+               = &(m->PageTable())[hash & m->PageTableHashMask()];
+
+       for (int i = 0; i < 8; i++) {
+               page_table_entry *entry = &group->entry[i];
+
+               if (entry->valid)
+                       continue;
+
+               m->FillPageTableEntry(entry, virtualSegmentID, virtualAddress,
+                       physicalAddress, protection, memoryType, false);
+               fMapCount++;
+               return B_OK;
+       }
+
+       // Didn't find one, try the secondary hash value
+
+       hash = page_table_entry::SecondaryHash(hash);
+       group = &(m->PageTable())[hash & m->PageTableHashMask()];
+
+       for (int i = 0; i < 8; i++) {
+               page_table_entry *entry = &group->entry[i];
+
+               if (entry->valid)
+                       continue;
+
+               m->FillPageTableEntry(entry, virtualSegmentID, virtualAddress,
+                       physicalAddress, protection, memoryType, true);
+               fMapCount++;
+               return B_OK;
+       }
+
+       panic("vm_translation_map.map_tmap: hash table full\n");
+       return B_ERROR;
+
+#if 0//X86
+/*
+       dprintf("pgdir at 0x%x\n", pgdir);
+       dprintf("index is %d\n", va / B_PAGE_SIZE / 1024);
+       dprintf("final at 0x%x\n", &pgdir[va / B_PAGE_SIZE / 1024]);
+       dprintf("value is 0x%x\n", *(int *)&pgdir[va / B_PAGE_SIZE / 1024]);
+       dprintf("present bit is %d\n", pgdir[va / B_PAGE_SIZE / 1024].present);
+       dprintf("addr is %d\n", pgdir[va / B_PAGE_SIZE / 1024].addr);
+*/
+       page_directory_entry* pd = fPagingStructures->pgdir_virt;
+
+       // check to see if a page table exists for this range
+       uint32 index = VADDR_TO_PDENT(va);
+       if ((pd[index] & PPC_PDE_PRESENT) == 0) {
+               phys_addr_t pgtable;
+               vm_page *page;
+
+               // we need to allocate a pgtable
+               page = vm_page_allocate_page(reservation,
+                       PAGE_STATE_WIRED | VM_PAGE_ALLOC_CLEAR);
+
+               DEBUG_PAGE_ACCESS_END(page);
+
+               pgtable = (phys_addr_t)page->physical_page_number * B_PAGE_SIZE;
+
+               TRACE("map_tmap: asked for free page for pgtable. 0x%lx\n", pgtable);
+
+               // put it in the pgdir
+               PPCPagingMethodClassic::PutPageTableInPageDir(&pd[index], pgtable,
+                       attributes
+                               | ((attributes & B_USER_PROTECTION) != 0
+                                               ? B_WRITE_AREA : B_KERNEL_WRITE_AREA));
+
+               // update any other page directories, if it maps kernel space
+               if (index >= FIRST_KERNEL_PGDIR_ENT
+                       && index < (FIRST_KERNEL_PGDIR_ENT + NUM_KERNEL_PGDIR_ENTS)) {
+                       PPCPagingStructuresClassic::UpdateAllPageDirs(index, pd[index]);
+               }
+
+               fMapCount++;
+       }
+
+       // now, fill in the pentry
+       Thread* thread = thread_get_current_thread();
+       ThreadCPUPinner pinner(thread);
+
+       page_table_entry* pt = (page_table_entry*)fPageMapper->GetPageTableAt(
+               pd[index] & PPC_PDE_ADDRESS_MASK);
+       index = VADDR_TO_PTENT(va);
+
+       ASSERT_PRINT((pt[index] & PPC_PTE_PRESENT) == 0,
+               "virtual address: %#" B_PRIxADDR ", existing pte: %#" B_PRIx32, va,
+               pt[index]);
+
+       PPCPagingMethodClassic::PutPageTableEntryInTable(&pt[index], pa, attributes,
+               memoryType, fIsKernelMap);
+
+       pinner.Unlock();
+
+       // Note: We don't need to invalidate the TLB for this address, as previously
+       // the entry was not present and the TLB doesn't cache those entries.
+

[ *** diff truncated: 985 lines dropped *** ]


############################################################################

Commit:      ce686ba95793b28a7d39ca62cb7efecc40897121
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Nov  8 22:59:46 2013 UTC

PPC: Split cpu-specific files into separate objects

----------------------------------------------------------------------------

############################################################################

Commit:      8554d8ba8ae6bcde59525a93f1f7f98ac22ed69a
Author:      François Revol <revol@xxxxxxx>
Date:        Sat Dec 21 13:38:48 2013 UTC

PPC: rename arch_exceptions_44x.S to arch_exceptions_440.S

----------------------------------------------------------------------------

############################################################################

Commit:      fdef69f48617671f7d63ab54783563686e066c39
Author:      François Revol <revol@xxxxxxx>
Date:        Sat Dec 21 13:51:09 2013 UTC

PPC: compile arch_exception*.S in cpu-specific objects

----------------------------------------------------------------------------

############################################################################

Commit:      5cf56f6478a3245bd84723a9b2081be5697aadba
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Jun 12 21:47:51 2015 UTC

ppc/paging: Convert to new-style CPU management

* Aka, post-scheduler changes
* Luckily PPC paging code is very similar to x86 paging now

----------------------------------------------------------------------------

############################################################################

Commit:      7cfdb69e24e49d7328d7addf34894e7fa2e850ff
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Jun 12 22:09:35 2015 UTC

PPC: u-boot: define bootloader stack in linker script

Align with ARM stuff from hrev47834. It's not yet used though.

----------------------------------------------------------------------------

############################################################################

Commit:      404b1d988f80e5c56cf6ff2bd86e5f9cb8384f4a
Author:      François Revol <revol@xxxxxxx>
Date:        Sat Jun 13 00:55:31 2015 UTC

PPC: U-Boot: fix gUBootOS offset

Since the removal of some other variables we were overwriting some random 
function.

----------------------------------------------------------------------------

############################################################################

Commit:      67c989ec243c1e8d865ba95a31498ae0662ad5b6
Author:      François Revol <revol@xxxxxxx>
Date:        Thu Nov  5 21:32:40 2015 UTC

PPC: Use FUNCTION_END in arch_asm.S

----------------------------------------------------------------------------

############################################################################

Commit:      a9e30a4a4965b4adeb225badcb15ba3ebc321309
Author:      François Revol <revol@xxxxxxx>
Date:        Thu Nov  5 21:36:19 2015 UTC

PPC: arch_asm.S: style fix

Capitalize TODO & FIXME, gedit prefers those

----------------------------------------------------------------------------

############################################################################

Commit:      e1cb2b73073de78ac779489bb5f12f1c50682b60
Author:      François Revol <revol@xxxxxxx>
Date:        Thu Nov  5 21:37:16 2015 UTC

PPC: add stub for arch_int_assign_to_cpu

----------------------------------------------------------------------------

############################################################################

Commit:      74536e829666d392a2320a3e0aca3148c645f217
Author:      François Revol <revol@xxxxxxx>
Date:        Thu Nov  5 21:37:42 2015 UTC

PPC: add stub for arch_smp_send_multicast_ici

----------------------------------------------------------------------------

############################################################################

Commit:      2e54831a5adda856b442196eed26b2d0c85a1669
Author:      François Revol <revol@xxxxxxx>
Date:        Thu Nov  5 21:38:38 2015 UTC

PPC: call debug_uart_from_fdt with C++ linkage

----------------------------------------------------------------------------

############################################################################

Commit:      7528c1adce3c1bc4051b01aa363a88a6e19a80e7
Author:      François Revol <revol@xxxxxxx>
Date:        Sat Dec 19 01:34:50 2015 UTC

U-Boot: PPC: make the shift calculation more obvious

It's the 11th bit, counting from the MSB, on the top 16 bits.

----------------------------------------------------------------------------

############################################################################

Commit:      9dec4c56fc846fc7577582d3e76c4ce79623b219
Author:      François Revol <revol@xxxxxxx>
Date:        Sat Dec 19 01:37:27 2015 UTC

U-Boot: PPC: Try to enable unaligned transfers

This however doesn't help with the 64bit float operations that
gcc emits when assigning the physical framebuffer address in kernel_args,
which is a packed struct...

----------------------------------------------------------------------------

############################################################################

Commit:      c1d25b488ed73ca1c618f986dec34cf552e5078c
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Apr 22 15:03:17 2016 UTC

PPC: make arch_framebuffer.h more like ARM version

Now the only difference is the physical address, which is returned as
phys_addr_t as should be.

----------------------------------------------------------------------------

############################################################################

Commit:      ec7022826a34f800e50cd42fd1f54f2b9a348af7
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Apr 22 15:12:19 2016 UTC

HACK: U-Boot: add a GraphicsDevice arch_video prober + hardcode Sam460

Sadly even digging the RAM for a valid GraphicsDevice struct fails
on my Sam460 board... so for now I hardcode the address anyway.

TODO: clean this mess

----------------------------------------------------------------------------

############################################################################

Commit:      62ab4262f6da197a8ce20910c7b5f80fc4845a7c
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Apr 22 15:14:53 2016 UTC

U-Boot: PPC: dump the start of the board_data struct

Not so useful but just in case...

Sadly, this struct is both compile-time and arch dependent :-(

----------------------------------------------------------------------------

############################################################################

Commit:      d3239334e52640a71e92e5dd898c906cead3aeab
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Apr 22 15:19:34 2016 UTC

PPC: work around a gcc/binutils bug on DEBUG build

cf. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=37758

----------------------------------------------------------------------------

############################################################################

Commit:      9ce0c271426be18a0bff1997cffa8867d8015bc8
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Apr 22 15:32:30 2016 UTC

HACK: PPC: stub out arch_cpu_user_{memcpy,memset,strlcpy}

Someone should write the PPC asm for this...

----------------------------------------------------------------------------

############################################################################

Commit:      973323571170da6b318a5e4c2b664a0feaef8405
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Apr 22 15:35:23 2016 UTC

HACK: PPC: work around some atomic issue

TODO: Is it still needed?

----------------------------------------------------------------------------

############################################################################

Commit:      f652143b303fe64d58eaab84b0fa07102a7e1dc7
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Apr 22 15:35:58 2016 UTC

HACK: PPC: work around some atomic issue

TODO: Is this still needed?

----------------------------------------------------------------------------

############################################################################

Commit:      84f5368be0cd9c5643a1ebcbecff7cc4e88e0c13
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Apr 22 15:36:41 2016 UTC

FIXME: (disabled) PPC: add generic atomic implementation

----------------------------------------------------------------------------

############################################################################

Commit:      9346f5ebceab0c435fe88d3abb282235096fe90c
Author:      François Revol <revol@xxxxxxx>
Date:        Fri Apr 22 15:37:59 2016 UTC

HACK: PPC: add some build flags to try to work around alignment issue

didn't really work...

----------------------------------------------------------------------------

