[haiku-commits] haiku: hrev54529 - src/system/kernel/vm src/system/kernel headers/private/kernel/vm

  • From: waddlesplash <waddlesplash@xxxxxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Sat, 22 Aug 2020 20:56:02 -0400 (EDT)

hrev54529 adds 3 changesets to branch 'master'
old head: bafb111359242803e54cb3b339e943af6bb336d7
new head: 481b113707a4f6d88f9e9916f11d663d8aea96cf
overview: 
https://git.haiku-os.org/haiku/log/?qt=range&q=481b113707a4+%5Ebafb11135924

----------------------------------------------------------------------------

75a10a74e889: kernel/vm: Make vm_copy_area take page protections into account.
  
  When copying an area with vm_copy_area, only the new protection would
  be applied and any existing page protections on the source area were
  ignored.
  
  For areas with stricter area protection than page protection, this led
  to faults when accessing the copy. In the opposite case it led to too
  relaxed protection. Currently the only user of vm_copy_area is
  fork_team, which goes through all areas of the parent and copies them
  to the new team. Hence page protections were ignored for all forked
  teams.
  
  Remove the protection argument and instead always carry over the source
  area protection and duplicate the page protections when present.
  
  Also make sure to take the page protections into account when deciding
  whether or not the copy is writable and therefore needs copy-on-write
  semantics.
  
  Change-Id: I52f295f2aaa66e31b4900b754343b3be9a19ba30
  Reviewed-on: https://review.haiku-os.org/c/haiku/+/3166
  Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

5984257d436e: kernel/vm: Avoid page protection overflows for very large areas.
  
  For areas of 32 TiB or larger, the page protection array size
  computation would overflow.
  
  Change-Id: Ic95d9a6e35bbedb165c2bbd382f6c47edde07ac2
  Reviewed-on: https://review.haiku-os.org/c/haiku/+/3167
  Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

481b113707a4: kernel/vm: Dump page protections array address in area command.
  
  Had to drop the trailing 's' to make the label fit the alignment...
  
  Change-Id: Iae88c4cd92c3f54bf3ea3433ea3dafe5df90a8e8
  Reviewed-on: https://review.haiku-os.org/c/haiku/+/3168
  Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

                                            [ Michael Lotz <mmlr@xxxxxxxx> ]

----------------------------------------------------------------------------

3 files changed, 39 insertions(+), 18 deletions(-)
headers/private/kernel/vm/vm.h |  2 +-
src/system/kernel/team.cpp     |  2 +-
src/system/kernel/vm/vm.cpp    | 53 ++++++++++++++++++++++++++------------

############################################################################

Commit:      75a10a74e88903ca9a7e8ff450680a877a75f6cd
URL:         https://git.haiku-os.org/haiku/commit/?id=75a10a74e889
Author:      Michael Lotz <mmlr@xxxxxxxx>
Date:        Sun Aug 23 00:27:42 2020 UTC
Committer:   waddlesplash <waddlesplash@xxxxxxxxx>
Commit-Date: Sun Aug 23 00:55:58 2020 UTC

kernel/vm: Make vm_copy_area take page protections into account.

When copying an area with vm_copy_area, only the new protection would be
applied and any existing page protections on the source area were
ignored.

For areas with stricter area protection than page protection, this led
to faults when accessing the copy. In the opposite case it led to too
relaxed protection. Currently the only user of vm_copy_area is
fork_team, which goes through all areas of the parent and copies them to
the new team. Hence page protections were ignored for all forked teams.

Remove the protection argument and instead always carry over the source
area protection and duplicate the page protections when present.

Also make sure to take the page protections into account when deciding
whether or not the copy is writable and therefore needs copy-on-write
semantics.

Change-Id: I52f295f2aaa66e31b4900b754343b3be9a19ba30
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3166
Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

----------------------------------------------------------------------------

diff --git a/headers/private/kernel/vm/vm.h b/headers/private/kernel/vm/vm.h
index 9cbf9848e4..715a75d39d 100644
--- a/headers/private/kernel/vm/vm.h
+++ b/headers/private/kernel/vm/vm.h
@@ -113,7 +113,7 @@ void vm_area_put_locked_cache(struct VMCache *cache);
 area_id vm_create_null_area(team_id team, const char *name, void **address,
                        uint32 addressSpec, addr_t size, uint32 flags);
 area_id vm_copy_area(team_id team, const char *name, void **_address,
-                       uint32 addressSpec, uint32 protection, area_id sourceID);
+                       uint32 addressSpec, area_id sourceID);
 area_id vm_clone_area(team_id team, const char *name, void **address,
                        uint32 addressSpec, uint32 protection, uint32 mapping,
                        area_id sourceArea, bool kernel);
diff --git a/src/system/kernel/team.cpp b/src/system/kernel/team.cpp
index ec5339a5ae..8fc10a8087 100644
--- a/src/system/kernel/team.cpp
+++ b/src/system/kernel/team.cpp
@@ -2138,7 +2138,7 @@ fork_team(void)
                } else {
                        void* address;
                        area_id area = vm_copy_area(team->address_space->ID(), info.name,
-                               &address, B_CLONE_ADDRESS, info.protection, info.area);
+                               &address, B_CLONE_ADDRESS, info.area);
                        if (area < B_OK) {
                                status = area;
                                break;
diff --git a/src/system/kernel/vm/vm.cpp b/src/system/kernel/vm/vm.cpp
index cc6aff51fb..36d8a3141b 100644
--- a/src/system/kernel/vm/vm.cpp
+++ b/src/system/kernel/vm/vm.cpp
@@ -2535,17 +2535,8 @@ vm_copy_on_write_area(VMCache* lowerCache,
 
 area_id
 vm_copy_area(team_id team, const char* name, void** _address,
-       uint32 addressSpec, uint32 protection, area_id sourceID)
+       uint32 addressSpec, area_id sourceID)
 {
-       bool writableCopy = (protection & (B_KERNEL_WRITE_AREA | B_WRITE_AREA)) != 0;
-
-       if ((protection & B_KERNEL_PROTECTION) == 0) {
-               // set the same protection for the kernel as for userland
-               protection |= B_KERNEL_READ_AREA;
-               if (writableCopy)
-                       protection |= B_KERNEL_WRITE_AREA;
-       }
-
        // Do the locking: target address space, all address spaces associated with
        // the source cache, and the cache itself.
        // the source cache, and the cache itself.
        MultiAddressSpaceLocker locker;
@@ -2618,6 +2609,30 @@ vm_copy_area(team_id team, const char* name, void** _address,
                vm_page_reservation*    fReservation;
        } pagesUnreserver(wiredPages > 0 ? &wiredPagesReservation : NULL);
 
+       bool writableCopy
+               = (source->protection & (B_KERNEL_WRITE_AREA | B_WRITE_AREA)) != 0;
+       uint8* targetPageProtections = NULL;
+
+       if (source->page_protections != NULL) {
+               size_t bytes = (source->Size() / B_PAGE_SIZE + 1) / 2;
+               targetPageProtections = (uint8*)malloc_etc(bytes,
+                       HEAP_DONT_LOCK_KERNEL_SPACE);
+               if (targetPageProtections == NULL)
+                       return B_NO_MEMORY;
+
+               memcpy(targetPageProtections, source->page_protections, bytes);
+
+               if (!writableCopy) {
+                       for (size_t i = 0; i < bytes; i++) {
+                               if ((targetPageProtections[i]
+                                               & (B_WRITE_AREA | B_WRITE_AREA << 4)) != 0) {
+                                       writableCopy = true;
+                                       break;
+                               }
+                       }
+               }
+       }
+
        if (addressSpec == B_CLONE_ADDRESS) {
                addressSpec = B_EXACT_ADDRESS;
                *_address = (void*)source->Base();
@@ -2631,12 +2646,17 @@ vm_copy_area(team_id team, const char* name, void** _address,
        addressRestrictions.address = *_address;
        addressRestrictions.address_specification = addressSpec;
        status = map_backing_store(targetAddressSpace, cache, source->cache_offset,
-               name, source->Size(), source->wiring, protection,
+               name, source->Size(), source->wiring, source->protection,
                sharedArea ? REGION_NO_PRIVATE_MAP : REGION_PRIVATE_MAP,
                writableCopy ? 0 : CREATE_AREA_DONT_COMMIT_MEMORY,
                &addressRestrictions, true, &target, _address);
-       if (status < B_OK)
+       if (status < B_OK) {
+               free_etc(targetPageProtections, HEAP_DONT_LOCK_KERNEL_SPACE);
                return status;
+       }
+
+       if (targetPageProtections != NULL)
+               target->page_protections = targetPageProtections;
 
        if (sharedArea) {
                // The new area uses the old area's cache, but map_backing_store()
@@ -2647,7 +2667,7 @@ vm_copy_area(team_id team, const char* name, void** _address,
        // If the source area is writable, we need to move it one layer up as well
 
        if (!sharedArea) {
-               if ((source->protection & (B_KERNEL_WRITE_AREA | B_WRITE_AREA)) != 0) {
+               if (writableCopy) {
                        // TODO: do something more useful if this fails!
                        if (vm_copy_on_write_area(cache,
                                        wiredPages > 0 ? &wiredPagesReservation 
: NULL) < B_OK) {

############################################################################

Commit:      5984257d436e07a1304f07722eaabcad9220a562
URL:         https://git.haiku-os.org/haiku/commit/?id=5984257d436e
Author:      Michael Lotz <mmlr@xxxxxxxx>
Date:        Sun Aug 23 00:41:31 2020 UTC
Committer:   waddlesplash <waddlesplash@xxxxxxxxx>
Commit-Date: Sun Aug 23 00:55:58 2020 UTC

kernel/vm: Avoid page protection overflows for very large areas.

For areas of 32 TiB or larger, the page protection array size
computation would overflow.

Change-Id: Ic95d9a6e35bbedb165c2bbd382f6c47edde07ac2
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3167
Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

----------------------------------------------------------------------------

diff --git a/src/system/kernel/vm/vm.cpp b/src/system/kernel/vm/vm.cpp
index 36d8a3141b..37f2632d5d 100644
--- a/src/system/kernel/vm/vm.cpp
+++ b/src/system/kernel/vm/vm.cpp
@@ -462,7 +462,7 @@ allocate_area_page_protections(VMArea* area)
 {
        // In the page protections we store only the three user protections,
        // so we use 4 bits per page.
-       uint32 bytes = (area->Size() / B_PAGE_SIZE + 1) / 2;
+       size_t bytes = (area->Size() / B_PAGE_SIZE + 1) / 2;
        area->page_protections = (uint8*)malloc_etc(bytes,
                HEAP_DONT_LOCK_KERNEL_SPACE);
        if (area->page_protections == NULL)
@@ -481,7 +481,7 @@ static inline void
 set_area_page_protection(VMArea* area, addr_t pageAddress, uint32 protection)
 {
        protection &= B_READ_AREA | B_WRITE_AREA | B_EXECUTE_AREA;
-       uint32 pageIndex = (pageAddress - area->Base()) / B_PAGE_SIZE;
+       addr_t pageIndex = (pageAddress - area->Base()) / B_PAGE_SIZE;
        uint8& entry = area->page_protections[pageIndex / 2];
        if (pageIndex % 2 == 0)
                entry = (entry & 0xf0) | protection;
@@ -5228,7 +5228,7 @@ vm_resize_area(area_id areaID, size_t newSize, bool kernel)
        if (status == B_OK) {
                // Shrink or grow individual page protections if in use.
                if (area->page_protections != NULL) {
-                       uint32 bytes = (newSize / B_PAGE_SIZE + 1) / 2;
+                       size_t bytes = (newSize / B_PAGE_SIZE + 1) / 2;
                        uint8* newProtections
                                = (uint8*)realloc(area->page_protections, bytes);
                        if (newProtections == NULL)

############################################################################

Revision:    hrev54529
Commit:      481b113707a4f6d88f9e9916f11d663d8aea96cf
URL:         https://git.haiku-os.org/haiku/commit/?id=481b113707a4
Author:      Michael Lotz <mmlr@xxxxxxxx>
Date:        Sun Aug 23 00:44:42 2020 UTC
Committer:   waddlesplash <waddlesplash@xxxxxxxxx>
Commit-Date: Sun Aug 23 00:55:58 2020 UTC

kernel/vm: Dump page protections array address in area command.

Had to drop the trailing 's' to make the label fit the alignment...

Change-Id: Iae88c4cd92c3f54bf3ea3433ea3dafe5df90a8e8
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3168
Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

----------------------------------------------------------------------------

diff --git a/src/system/kernel/vm/vm.cpp b/src/system/kernel/vm/vm.cpp
index 37f2632d5d..d9f6c999f4 100644
--- a/src/system/kernel/vm/vm.cpp
+++ b/src/system/kernel/vm/vm.cpp
@@ -3435,6 +3435,7 @@ dump_area_struct(VMArea* area, bool mappings)
        kprintf("base:\t\t0x%lx\n", area->Base());
        kprintf("size:\t\t0x%lx\n", area->Size());
        kprintf("protection:\t0x%" B_PRIx32 "\n", area->protection);
+       kprintf("page_protection:%p\n", area->page_protections);
        kprintf("wiring:\t\t0x%x\n", area->wiring);
        kprintf("memory_type:\t%#" B_PRIx32 "\n", area->MemoryType());
        kprintf("cache:\t\t%p\n", area->cache);

