[haiku-commits] haiku: hrev54274 - src/system/kernel/vm src/system/kernel/locks headers/private/kernel/util headers/private/kernel

  • From: waddlesplash <waddlesplash@xxxxxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Fri, 29 May 2020 21:47:45 -0400 (EDT)

hrev54274 adds 5 changesets to branch 'master'
old head: 38963e759643e05836304fbf2b3b45c6c8db3e3a
new head: 621f53700fa3e6d84bfca3de0a36db31683427ae
overview: 
https://git.haiku-os.org/haiku/log/?qt=range&q=621f53700fa3+%5E38963e759643

----------------------------------------------------------------------------

57656b93b602: kernel/locks: Implement lock switching for recursive_lock.
  
  This allows switching to a recursive_lock from another recursive_lock, a
  mutex or a read-locked rw_lock, analogous to the switching possibilities
  already available for mutex.
  
  With this, recursive_locks can be used in more complex situations where
  previously only mutexes would work.
  
  Also add a debugger command to dump a recursive_lock.
  
  Change-Id: Ibeeae1b42c543d925dec61a3b257e1f3df7f8934
  Reviewed-on: https://review.haiku-os.org/c/haiku/+/2834
  Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

428bc69ab87e: VMCache: Factor out a _FreePageRange method.
  
  The code in the Resize and Rebase methods was identical except for the
  iterator.
  
  Change-Id: I9f6b3c2c09af0c26778215bd627fed030c4d46f1
  Reviewed-on: https://review.haiku-os.org/c/haiku/+/2835
  Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

4986a9a3fd3f: Revert "kernel: Remove the B_KERNEL_AREA protection flag."
  
  This reverts parts of hrev52546 that removed the B_KERNEL_AREA
  protection flag and replaced it with an address space comparison.
  
  Checking whether an area belongs to the kernel address space does not
  work when the area is looked up in a user address space, as areas can
  only ever belong to one address space. This rendered these checks
  ineffective and made it possible to unmap, delete or resize
  kernel-managed areas from their respective userland teams.
  
  That protection was meant to be applied to the team user data area,
  which was introduced to reduce kernel-to-userland overhead by directly
  sharing some data between the two. The area was intended to be set up
  in such a manner that this sharing is safe on the kernel side, and the
  B_KERNEL_AREA flag was introduced specifically for this purpose.
  
  Incidentally, the actual application of the B_KERNEL_AREA flag to the
  team user data area was apparently forgotten in the original commit.
  
  The absence of that protection allowed applications to induce KDLs, for
  example by modifying the user area and then generating a signal.
  
  This change restores the B_KERNEL_AREA flag and also applies it to the
  team user data area.
  
  Change-Id: I993bb1cf7c6ae10085100db7df7cc23fe66f4edd
  Reviewed-on: https://review.haiku-os.org/c/haiku/+/2836
  Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

928d780bc9c3: kernel/vm: Factor out intersect_area and use it for cut_area.
  
  It combines the intersection check with clamping the address, size and
  offset so that they fall within the area.
  
  Change-Id: Iffd3feca75d4e6389d23b9d70294253b4c3d1f4c
  Reviewed-on: https://review.haiku-os.org/c/haiku/+/2837
  Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

621f53700fa3: AVLTree: Add convenience LeftMost/RightMost with no arguments.
  
  They return the left-most and right-most nodes of the entire tree, i.e.
  the traversal starts from the root node.
  
  Change-Id: I651a9db6d12308aef4c2ed71484958428e58c9bc
  Reviewed-on: https://review.haiku-os.org/c/haiku/+/2838
  Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>

                                            [ Michael Lotz <mmlr@xxxxxxxx> ]

----------------------------------------------------------------------------

11 files changed, 331 insertions(+), 117 deletions(-)
headers/private/kernel/lock.h                    |  13 ++
headers/private/kernel/util/AVLTree.h            |  20 +++
headers/private/kernel/util/AVLTreeBase.h        |  18 ++-
headers/private/kernel/vm/VMCache.h              |   3 +
headers/private/system/vm_defs.h                 |   3 +
.../debugger/user_interface/util/UiUtils.cpp     |   1 +
src/system/kernel/commpage.cpp                   |   2 +-
src/system/kernel/locks/lock.cpp                 | 159 ++++++++++++++++++-
src/system/kernel/team.cpp                       |   3 +-
src/system/kernel/vm/VMCache.cpp                 | 116 ++++++--------
src/system/kernel/vm/vm.cpp                      | 110 ++++++++-----

############################################################################

Commit:      57656b93b602d41246039ff396febfd16071fa6b
URL:         https://git.haiku-os.org/haiku/commit/?id=57656b93b602
Author:      Michael Lotz <mmlr@xxxxxxxx>
Date:        Thu May 14 06:38:17 2020 UTC
Committer:   waddlesplash <waddlesplash@xxxxxxxxx>
Commit-Date: Sat May 30 01:47:40 2020 UTC

kernel/locks: Implement lock switching for recursive_lock.

This allows switching to a recursive_lock from another recursive_lock, a
mutex or a read-locked rw_lock, analogous to the switching possibilities
already available for mutex.

With this, recursive_locks can be used in more complex situations where
previously only mutexes would work.

Also add a debugger command to dump a recursive_lock.

Change-Id: Ibeeae1b42c543d925dec61a3b257e1f3df7f8934
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2834
Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>
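
For illustration, a minimal sketch of how the new switching functions might
be used: a hypothetical global object list guarded by a mutex, where each
element is additionally guarded by its own recursive_lock. The Object type
and the sObjectList* names are invented, and each object's lock is assumed
to have been set up with recursive_lock_init().

#include <lock.h>

struct Object {
	Object*			next;
	recursive_lock	lock;
		// assumed to be initialized with recursive_lock_init()
};

static mutex sObjectListLock = MUTEX_INITIALIZER("object list");
static Object* sObjectList = NULL;

static status_t
lock_first_object(Object** _object)
{
	mutex_lock(&sObjectListLock);

	Object* object = sObjectList;
	if (object == NULL) {
		mutex_unlock(&sObjectListLock);
		return B_ENTRY_NOT_FOUND;
	}

	// Atomically unlock the list mutex and start waiting for the object's
	// recursive_lock, so the object cannot go away in between, provided
	// objects are only destroyed with the list mutex held.
	status_t status = recursive_lock_switch_from_mutex(&sObjectListLock,
		&object->lock);
	if (status != B_OK)
		return status;

	*_object = object;
	return B_OK;
}

As with mutex_switch_lock(), the operation is only safe as long as the
"from" lock is held while the object guarded by the "to" lock is destroyed.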

----------------------------------------------------------------------------

diff --git a/headers/private/kernel/lock.h b/headers/private/kernel/lock.h
index 770c3cb895..92d0dda24a 100644
--- a/headers/private/kernel/lock.h
+++ b/headers/private/kernel/lock.h
@@ -126,6 +126,19 @@ extern void recursive_lock_destroy(recursive_lock *lock);
 extern status_t recursive_lock_lock(recursive_lock *lock);
 extern status_t recursive_lock_trylock(recursive_lock *lock);
 extern void recursive_lock_unlock(recursive_lock *lock);
+extern status_t recursive_lock_switch_lock(recursive_lock* from,
+       recursive_lock* to);
+       // Unlocks "from" and locks "to" such that unlocking and starting to 
wait
+       // for the lock is atomic. I.e. if "from" guards the object "to" belongs
+       // to, the operation is safe as long as "from" is held while destroying
+       // "to".
+extern status_t recursive_lock_switch_from_mutex(mutex* from,
+       recursive_lock* to);
+       // Like recursive_lock_switch_lock(), just for switching from a mutex.
+extern status_t recursive_lock_switch_from_read_lock(rw_lock* from,
+       recursive_lock* to);
+       // Like recursive_lock_switch_lock(), just for switching from a 
read-locked
+       // rw_lock.
 extern int32 recursive_lock_get_recursion(recursive_lock *lock);
 
 extern void rw_lock_init(rw_lock* lock, const char* name);
diff --git a/src/system/kernel/locks/lock.cpp b/src/system/kernel/locks/lock.cpp
index 2176b3d290..c6d5bcdba6 100644
--- a/src/system/kernel/locks/lock.cpp
+++ b/src/system/kernel/locks/lock.cpp
@@ -56,9 +56,7 @@ recursive_lock_get_recursion(recursive_lock *lock)
 void
 recursive_lock_init(recursive_lock *lock, const char *name)
 {
-       mutex_init(&lock->lock, name != NULL ? name : "recursive lock");
-       RECURSIVE_LOCK_HOLDER(lock) = -1;
-       lock->recursion = 0;
+       recursive_lock_init_etc(lock, name, 0);
 }
 
 
@@ -66,7 +64,9 @@ void
 recursive_lock_init_etc(recursive_lock *lock, const char *name, uint32 flags)
 {
        mutex_init_etc(&lock->lock, name != NULL ? name : "recursive lock", 
flags);
-       RECURSIVE_LOCK_HOLDER(lock) = -1;
+#if !KDEBUG
+       lock->holder = -1;
+#endif
        lock->recursion = 0;
 }
 
@@ -147,6 +147,151 @@ recursive_lock_unlock(recursive_lock *lock)
 }
 
 
+status_t
+recursive_lock_switch_lock(recursive_lock* from, recursive_lock* to)
+{
+#if KDEBUG
+       if (!gKernelStartup && !are_interrupts_enabled()) {
+               panic("recursive_lock_switch_lock(): called with interrupts "
+                       "disabled for locks %p, %p", from, to);
+       }
+#endif
+
+       if (--from->recursion > 0)
+               return recursive_lock_lock(to);
+
+#if !KDEBUG
+       from->holder = -1;
+#endif
+
+       thread_id thread = thread_get_current_thread_id();
+
+       if (thread == RECURSIVE_LOCK_HOLDER(to)) {
+               to->recursion++;
+               mutex_unlock(&from->lock);
+               return B_OK;
+       }
+
+       status_t status = mutex_switch_lock(&from->lock, &to->lock);
+       if (status != B_OK) {
+               from->recursion++;
+#if !KDEBUG
+               from->holder = thread;
+#endif
+               return status;
+       }
+
+#if !KDEBUG
+       to->holder = thread;
+#endif
+       to->recursion++;
+       return B_OK;
+}
+
+
+status_t
+recursive_lock_switch_from_mutex(mutex* from, recursive_lock* to)
+{
+#if KDEBUG
+       if (!gKernelStartup && !are_interrupts_enabled()) {
+               panic("recursive_lock_switch_from_mutex(): called with 
interrupts "
+                       "disabled for locks %p, %p", from, to);
+       }
+#endif
+
+       thread_id thread = thread_get_current_thread_id();
+
+       if (thread == RECURSIVE_LOCK_HOLDER(to)) {
+               to->recursion++;
+               mutex_unlock(from);
+               return B_OK;
+       }
+
+       status_t status = mutex_switch_lock(from, &to->lock);
+       if (status != B_OK)
+               return status;
+
+#if !KDEBUG
+       to->holder = thread;
+#endif
+       to->recursion++;
+       return B_OK;
+}
+
+
+status_t
+recursive_lock_switch_from_read_lock(rw_lock* from, recursive_lock* to)
+{
+#if KDEBUG
+       if (!gKernelStartup && !are_interrupts_enabled()) {
+               panic("recursive_lock_switch_from_read_lock(): called with 
interrupts "
+                       "disabled for locks %p, %p", from, to);
+       }
+#endif
+
+       thread_id thread = thread_get_current_thread_id();
+
+       if (thread != RECURSIVE_LOCK_HOLDER(to)) {
+               status_t status = mutex_switch_from_read_lock(from, &to->lock);
+               if (status != B_OK)
+                       return status;
+
+#if !KDEBUG
+               to->holder = thread;
+#endif
+       } else {
+#if KDEBUG_RW_LOCK_DEBUG
+               _rw_lock_write_unlock(from);
+#else
+               int32 oldCount = atomic_add(&from->count, -1);
+               if (oldCount >= RW_LOCK_WRITER_COUNT_BASE)
+                       _rw_lock_read_unlock(from);
+#endif
+       }
+
+       to->recursion++;
+       return B_OK;
+}
+
+
+static int
+dump_recursive_lock_info(int argc, char** argv)
+{
+       if (argc < 2) {
+               print_debugger_command_usage(argv[0]);
+               return 0;
+       }
+
+       recursive_lock* lock = (recursive_lock*)parse_expression(argv[1]);
+
+       if (!IS_KERNEL_ADDRESS(lock)) {
+               kprintf("invalid address: %p\n", lock);
+               return 0;
+       }
+
+       kprintf("recursive_lock %p:\n", lock);
+       kprintf("  mutex:           %p\n", &lock->lock);
+       kprintf("  name:            %s\n", lock->lock.name);
+       kprintf("  flags:           0x%x\n", lock->lock.flags);
+#if KDEBUG
+       kprintf("  holder:          %" B_PRId32 "\n", lock->lock.holder);
+#else
+       kprintf("  holder:          %" B_PRId32 "\n", lock->holder);
+#endif
+       kprintf("  recursion:       %d\n", lock->recursion);
+
+       kprintf("  waiting threads:");
+       mutex_waiter* waiter = lock->lock.waiters;
+       while (waiter != NULL) {
+               kprintf(" %" B_PRId32, waiter->thread->id);
+               waiter = waiter->next;
+       }
+       kputs("\n");
+
+       return 0;
+}
+
+
 //     #pragma mark -
 
 
@@ -1002,4 +1147,10 @@ lock_debug_init()
                "<lock>\n"
                "Prints info about the specified rw lock.\n"
                "  <lock>  - pointer to the rw lock to print the info for.\n", 
0);
+       add_debugger_command_etc("recursivelock", &dump_recursive_lock_info,
+               "Dump info about a recursive lock",
+               "<lock>\n"
+               "Prints info about the specified recursive lock.\n"
+               "  <lock>  - pointer to the recursive lock to print the info 
for.\n",
+               0);
 }
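
For reference, a hypothetical invocation of the new KDL command; the
address and values below are invented, the output format follows the
kprintf() calls above:

kdebug> recursivelock 0xffffffff8234b620
recursive_lock 0xffffffff8234b620:
  mutex:           0xffffffff8234b620
  name:            object list
  flags:           0x0
  holder:          245
  recursion:       2
  waiting threads: 312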

############################################################################

Commit:      428bc69ab87e0c292882660da33acd68cff729dc
URL:         https://git.haiku-os.org/haiku/commit/?id=428bc69ab87e
Author:      Michael Lotz <mmlr@xxxxxxxx>
Date:        Fri May 22 22:00:38 2020 UTC
Committer:   waddlesplash <waddlesplash@xxxxxxxxx>
Commit-Date: Sat May 30 01:47:40 2020 UTC

VMCache: Factor out a _FreePageRange method.

The code in the Resize and Rebase methods was identical except for the
iterator.

Change-Id: I9f6b3c2c09af0c26778215bd627fed030c4d46f1
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2835
Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>
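
The helper returns true when it had to wait for a busy page, in which case
the caller has to restart with a fresh iterator. This leads to the retry
loops now used in Resize() and Rebase(), for example (from the diff below):

	if (newPageCount < oldPageCount) {
		// _FreePageRange() returns true after waiting for a busy page;
		// in that case simply retry with a fresh iterator.
		while (_FreePageRange(pages.GetIterator(newPageCount, true, true)))
			;
	}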

----------------------------------------------------------------------------

diff --git a/headers/private/kernel/vm/VMCache.h 
b/headers/private/kernel/vm/VMCache.h
index d5185b93b7..0aee4f9210 100644
--- a/headers/private/kernel/vm/VMCache.h
+++ b/headers/private/kernel/vm/VMCache.h
@@ -215,6 +215,9 @@ private:
                        void                            
_MergeWithOnlyConsumer();
                        void                            
_RemoveConsumer(VMCache* consumer);
 
+                       bool                            
_FreePageRange(VMCachePagesTree::Iterator it,
+                                                                       
page_num_t* toPage);
+
 private:
                        int32                           fRefCount;
                        mutex                           fLock;
diff --git a/src/system/kernel/vm/VMCache.cpp b/src/system/kernel/vm/VMCache.cpp
index f0f626c50d..d820333513 100644
--- a/src/system/kernel/vm/VMCache.cpp
+++ b/src/system/kernel/vm/VMCache.cpp
@@ -1111,6 +1111,46 @@ VMCache::SetMinimalCommitment(off_t commitment, int 
priority)
 }
 
 
+bool
+VMCache::_FreePageRange(VMCachePagesTree::Iterator it,
+       page_num_t* toPage = NULL)
+{
+       for (vm_page* page = it.Next();
+               page != NULL && (toPage == NULL || page->cache_offset < 
*toPage);
+               page = it.Next()) {
+
+               if (page->busy) {
+                       if (page->busy_writing) {
+                               // We cannot wait for the page to become 
available
+                               // as we might cause a deadlock this way
+                               page->busy_writing = false;
+                                       // this will notify the writer to free 
the page
+                               continue;
+                       }
+
+                       // wait for page to become unbusy
+                       WaitForPageEvents(page, PAGE_EVENT_NOT_BUSY, true);
+                       return true;
+               }
+
+               // remove the page and put it into the free queue
+               DEBUG_PAGE_ACCESS_START(page);
+               vm_remove_all_page_mappings(page);
+               ASSERT(page->WiredCount() == 0);
+                       // TODO: Find a real solution! If the page is wired
+                       // temporarily (e.g. by lock_memory()), we actually 
must not
+                       // unmap it!
+               RemovePage(page);
+                       // Note: When iterating through a IteratableSplayTree
+                       // removing the current node is safe.
+
+               vm_page_free(this, page);
+       }
+
+       return false;
+}
+
+
 /*!    This function updates the size field of the cache.
        If needed, it will free up all pages that don't belong to the cache 
anymore.
        The cache lock must be held when you call it.
@@ -1133,44 +1173,16 @@ VMCache::Resize(off_t newSize, int priority)
        if (status != B_OK)
                return status;
 
-       uint32 oldPageCount = (uint32)((virtual_end + B_PAGE_SIZE - 1)
+       page_num_t oldPageCount = (page_num_t)((virtual_end + B_PAGE_SIZE - 1)
+               >> PAGE_SHIFT);
+       page_num_t newPageCount = (page_num_t)((newSize + B_PAGE_SIZE - 1)
                >> PAGE_SHIFT);
-       uint32 newPageCount = (uint32)((newSize + B_PAGE_SIZE - 1) >> 
PAGE_SHIFT);
 
        if (newPageCount < oldPageCount) {
                // we need to remove all pages in the cache outside of the new 
virtual
                // size
-               for (VMCachePagesTree::Iterator it
-                                       = pages.GetIterator(newPageCount, true, 
true);
-                               vm_page* page = it.Next();) {
-                       if (page->busy) {
-                               if (page->busy_writing) {
-                                       // We cannot wait for the page to 
become available
-                                       // as we might cause a deadlock this way
-                                       page->busy_writing = false;
-                                               // this will notify the writer 
to free the page
-                               } else {
-                                       // wait for page to become unbusy
-                                       WaitForPageEvents(page, 
PAGE_EVENT_NOT_BUSY, true);
-
-                                       // restart from the start of the list
-                                       it = pages.GetIterator(newPageCount, 
true, true);
-                               }
-                               continue;
-                       }
-
-                       // remove the page and put it into the free queue
-                       DEBUG_PAGE_ACCESS_START(page);
-                       vm_remove_all_page_mappings(page);
-                       ASSERT(page->WiredCount() == 0);
-                               // TODO: Find a real solution! If the page is 
wired
-                               // temporarily (e.g. by lock_memory()), we 
actually must not
-                               // unmap it!
-                       RemovePage(page);
-                       vm_page_free(this, page);
-                               // Note: When iterating through a 
IteratableSplayTree
-                               // removing the current node is safe.
-               }
+               while (_FreePageRange(pages.GetIterator(newPageCount, true, 
true)))
+                       ;
        }
 
        virtual_end = newSize;
@@ -1199,43 +1211,13 @@ VMCache::Rebase(off_t newBase, int priority)
        if (status != B_OK)
                return status;
 
-       uint32 basePage = (uint32)(newBase >> PAGE_SHIFT);
+       page_num_t basePage = (page_num_t)(newBase >> PAGE_SHIFT);
 
        if (newBase > virtual_base) {
                // we need to remove all pages in the cache outside of the new 
virtual
-               // size
-               VMCachePagesTree::Iterator it = pages.GetIterator();
-               for (vm_page* page = it.Next();
-                               page != NULL && page->cache_offset < basePage;
-                               page = it.Next()) {
-                       if (page->busy) {
-                               if (page->busy_writing) {
-                                       // We cannot wait for the page to 
become available
-                                       // as we might cause a deadlock this way
-                                       page->busy_writing = false;
-                                               // this will notify the writer 
to free the page
-                               } else {
-                                       // wait for page to become unbusy
-                                       WaitForPageEvents(page, 
PAGE_EVENT_NOT_BUSY, true);
-
-                                       // restart from the start of the list
-                                       it = pages.GetIterator();
-                               }
-                               continue;
-                       }
-
-                       // remove the page and put it into the free queue
-                       DEBUG_PAGE_ACCESS_START(page);
-                       vm_remove_all_page_mappings(page);
-                       ASSERT(page->WiredCount() == 0);
-                               // TODO: Find a real solution! If the page is 
wired
-                               // temporarily (e.g. by lock_memory()), we 
actually must not
-                               // unmap it!
-                       RemovePage(page);
-                       vm_page_free(this, page);
-                               // Note: When iterating through a 
IteratableSplayTree
-                               // removing the current node is safe.
-               }
+               // base
+               while (_FreePageRange(pages.GetIterator(), &basePage))
+                       ;
        }
 
        virtual_base = newBase;

############################################################################

Commit:      4986a9a3fd3fd37e29dbd35a4b138c6e93ffb454
URL:         https://git.haiku-os.org/haiku/commit/?id=4986a9a3fd3f
Author:      Michael Lotz <mmlr@xxxxxxxx>
Date:        Sun May 24 20:14:09 2020 UTC
Committer:   waddlesplash <waddlesplash@xxxxxxxxx>
Commit-Date: Sat May 30 01:47:40 2020 UTC

Revert "kernel: Remove the B_KERNEL_AREA protection flag."

This reverts parts of hrev52546 that removed the B_KERNEL_AREA
protection flag and replaced it with an address space comparison.

Checking whether an area belongs to the kernel address space does not
work when the area is looked up in a user address space, as areas can
only ever belong to one address space. This rendered these checks
ineffective and made it possible to unmap, delete or resize
kernel-managed areas from their respective userland teams.

That protection was meant to be applied to the team user data area,
which was introduced to reduce kernel-to-userland overhead by directly
sharing some data between the two. The area was intended to be set up
in such a manner that this sharing is safe on the kernel side, and the
B_KERNEL_AREA flag was introduced specifically for this purpose.

Incidentally, the actual application of the B_KERNEL_AREA flag to the
team user data area was apparently forgotten in the original commit.

The absence of that protection allowed applications to induce KDLs, for
example by modifying the user area and then generating a signal.

This change restores the B_KERNEL_AREA flag and also applies it to the
team user data area.

Change-Id: I993bb1cf7c6ae10085100db7df7cc23fe66f4edd
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2836
Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>
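
A minimal sketch of what the restored flag is for, modelled on the team
user data area change below; the area name, size and the teamID variable
are invented. The area remains usable from userland according to
B_READ_AREA | B_WRITE_AREA, but attempts to delete, resize, re-protect or
unmap it from userland now fail with B_NOT_ALLOWED:

	virtual_address_restrictions virtualRestrictions = {};
	virtualRestrictions.address_specification = B_RANDOMIZED_ANY_ADDRESS;
	physical_address_restrictions physicalRestrictions = {};

	void* address;
	area_id area = create_area_etc(teamID, "kernel shared data",
		B_PAGE_SIZE, B_FULL_LOCK,
		B_READ_AREA | B_WRITE_AREA | B_KERNEL_AREA, 0, 0,
		&virtualRestrictions, &physicalRestrictions, &address);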

----------------------------------------------------------------------------

diff --git a/headers/private/system/vm_defs.h b/headers/private/system/vm_defs.h
index fa318288e7..eefa895e60 100644
--- a/headers/private/system/vm_defs.h
+++ b/headers/private/system/vm_defs.h
@@ -23,6 +23,9 @@
 //     flags region in the protection field.
 #define B_OVERCOMMITTING_AREA  (1 << 12)
 #define B_SHARED_AREA                  (1 << 13)
+#define B_KERNEL_AREA                  (1 << 14)
+       // Usable from userland according to its protection flags, but the area
+       // itself is not deletable, resizable, etc from userland.
 
 #define B_USER_AREA_FLAGS              \
        (B_USER_PROTECTION | B_OVERCOMMITTING_AREA | B_CLONEABLE_AREA)
diff --git a/src/kits/debugger/user_interface/util/UiUtils.cpp 
b/src/kits/debugger/user_interface/util/UiUtils.cpp
index 5e65d72c8c..848981f3a8 100644
--- a/src/kits/debugger/user_interface/util/UiUtils.cpp
+++ b/src/kits/debugger/user_interface/util/UiUtils.cpp
@@ -230,6 +230,7 @@ UiUtils::AreaProtectionFlagsToString(uint32 protection, 
BString& _output)
        ADD_AREA_FLAG_IF_PRESENT(B_OVERCOMMITTING_AREA, protection, _output, 
"o",
                "");
        ADD_AREA_FLAG_IF_PRESENT(B_SHARED_AREA, protection, "S", _output, "");
+       ADD_AREA_FLAG_IF_PRESENT(B_KERNEL_AREA, protection, "k", _output, "");
 
        if (protection != 0) {
                char buffer[32];
diff --git a/src/system/kernel/commpage.cpp b/src/system/kernel/commpage.cpp
index 4aa05f6640..d895feb001 100644
--- a/src/system/kernel/commpage.cpp
+++ b/src/system/kernel/commpage.cpp
@@ -66,7 +66,7 @@ clone_commpage_area(team_id team, void** address)
        if (*address == NULL)
                *address = (void*)KERNEL_USER_DATA_BASE;
        return vm_clone_area(team, "commpage", address,
-               B_RANDOMIZED_BASE_ADDRESS, B_READ_AREA | B_EXECUTE_AREA,
+               B_RANDOMIZED_BASE_ADDRESS, B_READ_AREA | B_EXECUTE_AREA | 
B_KERNEL_AREA,
                REGION_PRIVATE_MAP, sCommPageArea, true);
 }
 
diff --git a/src/system/kernel/team.cpp b/src/system/kernel/team.cpp
index e5f159de18..ec5339a5ae 100644
--- a/src/system/kernel/team.cpp
+++ b/src/system/kernel/team.cpp
@@ -1378,7 +1378,8 @@ create_team_user_data(Team* team, void* exactAddress = 
NULL)
 
        physical_address_restrictions physicalRestrictions = {};
        team->user_data_area = create_area_etc(team->id, "user area",
-               kTeamUserDataInitialSize, B_FULL_LOCK, B_READ_AREA | 
B_WRITE_AREA, 0, 0,
+               kTeamUserDataInitialSize, B_FULL_LOCK,
+               B_READ_AREA | B_WRITE_AREA | B_KERNEL_AREA, 0, 0,
                &virtualRestrictions, &physicalRestrictions, &address);
        if (team->user_data_area < 0)
                return team->user_data_area;
diff --git a/src/system/kernel/vm/vm.cpp b/src/system/kernel/vm/vm.cpp
index ecf90c5bc8..105bfc8181 100644
--- a/src/system/kernel/vm/vm.cpp
+++ b/src/system/kernel/vm/vm.cpp
@@ -822,7 +822,7 @@ unmap_address_range(VMAddressSpace* addressSpace, addr_t 
address, addr_t size,
                                VMArea* area = it.Next();) {
                        addr_t areaLast = area->Base() + (area->Size() - 1);
                        if (area->Base() < lastAddress && address < areaLast) {
-                               if (area->address_space == 
VMAddressSpace::Kernel()) {
+                               if ((area->protection & B_KERNEL_AREA) != 0) {
                                        dprintf("unmap_address_range: team %" 
B_PRId32 " tried to "
                                                "unmap range of kernel area %" 
B_PRId32 " (%s)\n",
                                                team_get_current_team_id(), 
area->id, area->name);
@@ -2147,6 +2147,9 @@ vm_clone_area(team_id team, const char* name, void** 
address,
                if (status != B_OK)
                        return status;
 
+               if (!kernel && (sourceArea->protection & B_KERNEL_AREA) != 0)
+                       return B_NOT_ALLOWED;
+
                sourceArea->protection |= B_SHARED_AREA;
                protection |= B_SHARED_AREA;
        }
@@ -2172,6 +2175,9 @@ vm_clone_area(team_id team, const char* name, void** 
address,
        if (sourceArea == NULL)
                return B_BAD_VALUE;
 
+       if (!kernel && (sourceArea->protection & B_KERNEL_AREA) != 0)
+               return B_NOT_ALLOWED;
+
        VMCache* cache = vm_area_get_locked_cache(sourceArea);
 
        if (!kernel && sourceAddressSpace != targetAddressSpace
@@ -2348,8 +2354,8 @@ vm_delete_area(team_id team, area_id id, bool kernel)
 
        cacheLocker.Unlock();
 
-       // SetFromArea will have returned an error if the area's owning team is 
not
-       // the same as the passed team, so we don't need to do those checks 
here.
+       if (!kernel && (area->protection & B_KERNEL_AREA) != 0)
+               return B_NOT_ALLOWED;
 
        delete_area(locker.AddressSpace(), area, false);
        return B_OK;
@@ -2633,7 +2639,8 @@ vm_set_area_protection(team_id team, area_id areaID, 
uint32 newProtection,
 
                cacheLocker.SetTo(cache, true); // already locked
 
-               if (!kernel && area->address_space == VMAddressSpace::Kernel()) 
{
+               if (!kernel && (area->address_space == VMAddressSpace::Kernel()
+                               || (area->protection & B_KERNEL_AREA) != 0)) {
                        dprintf("vm_set_area_protection: team %" B_PRId32 " 
tried to "
                                "set protection %#" B_PRIx32 " on kernel area 
%" B_PRId32
                                " (%s)\n", team, newProtection, areaID, 
area->name);
@@ -5065,7 +5072,8 @@ vm_resize_area(area_id areaID, size_t newSize, bool 
kernel)
                cacheLocker.SetTo(cache, true); // already locked
 
                // enforce restrictions
-               if (!kernel && area->address_space == VMAddressSpace::Kernel()) 
{
+               if (!kernel && (area->address_space == VMAddressSpace::Kernel()
+                               || (area->protection & B_KERNEL_AREA) != 0)) {
                        dprintf("vm_resize_area: team %" B_PRId32 " tried to "
                                "resize kernel area %" B_PRId32 " (%s)\n",
                                team_get_current_team_id(), areaID, area->name);
@@ -6542,7 +6550,7 @@ _user_set_memory_protection(void* _address, size_t size, 
uint32 protection)
                        if (area == NULL)
                                return B_NO_MEMORY;
 
-                       if (area->address_space == VMAddressSpace::Kernel())
+                       if ((area->protection & B_KERNEL_AREA) != 0)
                                return B_NOT_ALLOWED;
 
                        // TODO: For (shared) mapped files we should check 
whether the new

############################################################################

Commit:      928d780bc9c3c9d6109a12552014e6f46c5ae6bf
URL:         https://git.haiku-os.org/haiku/commit/?id=928d780bc9c3
Author:      Michael Lotz <mmlr@xxxxxxxx>
Date:        Sun May 24 22:09:42 2020 UTC
Committer:   waddlesplash <waddlesplash@xxxxxxxxx>
Commit-Date: Sat May 30 01:47:40 2020 UTC

kernel/vm: Factor out intersect_area and use it for cut_area.

It combines the intersection check with clamping the address, size and
offset so that they fall within the area.

Change-Id: Iffd3feca75d4e6389d23b9d70294253b4c3d1f4c
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2837
Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>
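
A standalone sketch of the clamping the new helper performs; the helper
body mirrors the diff below, while the Area struct and the sample numbers
are invented for this illustration. A request starting one page before a
four-page area and covering three pages is clamped to the first two pages
of the area:

#include <cstdint>
#include <cstdio>

typedef uint64_t addr_t;

struct Area {
	addr_t base;
	addr_t size;
	addr_t Base() const { return base; }
	addr_t Size() const { return size; }
};

// Clamp the requested [address, address + size) range to the area and
// compute the offset of the clamped start relative to the area base.
static bool
intersect_area(Area* area, addr_t& address, addr_t& size, addr_t& offset)
{
	if (address < area->Base()) {
		offset = area->Base() - address;
		if (offset >= size)
			return false;

		address = area->Base();
		size -= offset;
		offset = 0;
		if (size > area->Size())
			size = area->Size();

		return true;
	}

	offset = address - area->Base();
	if (offset >= area->Size())
		return false;

	if (size >= area->Size() - offset)
		size = area->Size() - offset;

	return true;
}

int
main()
{
	Area area = { 0x2000, 0x4000 };
	addr_t address = 0x1000, size = 0x3000, offset = 0;

	if (intersect_area(&area, address, size, offset)) {
		// prints: address 0x2000 size 0x2000 offset 0
		printf("address %#llx size %#llx offset %#llx\n",
			(unsigned long long)address, (unsigned long long)size,
			(unsigned long long)offset);
	}
	return 0;
}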

----------------------------------------------------------------------------

diff --git a/src/system/kernel/vm/vm.cpp b/src/system/kernel/vm/vm.cpp
index 105bfc8181..850d8c09a8 100644
--- a/src/system/kernel/vm/vm.cpp
+++ b/src/system/kernel/vm/vm.cpp
@@ -606,6 +606,34 @@ unmap_pages(VMArea* area, addr_t base, size_t size)
 }
 
 
+static inline bool
+intersect_area(VMArea* area, addr_t& address, addr_t& size, addr_t& offset)
+{
+       if (address < area->Base()) {
+               offset = area->Base() - address;
+               if (offset >= size)
+                       return false;
+
+               address = area->Base();
+               size -= offset;
+               offset = 0;
+               if (size > area->Size())
+                       size = area->Size();
+
+               return true;
+       }
+
+       offset = address - area->Base();
+       if (offset >= area->Size())
+               return false;
+
+       if (size >= area->Size() - offset)
+               size = area->Size() - offset;
+
+       return true;
+}
+
+
 /*!    Cuts a piece out of an area. If the given cut range covers the complete
        area, it is deleted. If it covers the beginning or the end, the area is
        resized accordingly. If the range covers some part in the middle of the
@@ -616,15 +644,14 @@ unmap_pages(VMArea* area, addr_t base, size_t size)
 */
 static status_t
 cut_area(VMAddressSpace* addressSpace, VMArea* area, addr_t address,
-       addr_t lastAddress, VMArea** _secondArea, bool kernel)
+       addr_t size, VMArea** _secondArea, bool kernel)
 {
-       // Does the cut range intersect with the area at all?
-       addr_t areaLast = area->Base() + (area->Size() - 1);
-       if (area->Base() > lastAddress || areaLast < address)
+       addr_t offset;
+       if (!intersect_area(area, address, size, offset))
                return B_OK;
 
        // Is the area fully covered?
-       if (area->Base() >= address && areaLast <= lastAddress) {
+       if (address == area->Base() && size == area->Size()) {
                delete_area(addressSpace, area, false);
                return B_OK;
        }
@@ -650,23 +677,20 @@ cut_area(VMAddressSpace* addressSpace, VMArea* area, 
addr_t address,
                && cache->consumers.IsEmpty() && cache->type == CACHE_TYPE_RAM;
 
        // Cut the end only?
-       if (areaLast <= lastAddress) {
-               size_t oldSize = area->Size();
-               size_t newSize = address - area->Base();
-
-               status_t error = addressSpace->ShrinkAreaTail(area, newSize,
+       if (offset > 0 && size == area->Size() - offset) {
+               status_t error = addressSpace->ShrinkAreaTail(area, offset,
                        allocationFlags);
                if (error != B_OK)
                        return error;
 
                // unmap pages
-               unmap_pages(area, address, oldSize - newSize);
+               unmap_pages(area, address, size);
 
                if (onlyCacheUser) {
                        // Since VMCache::Resize() can temporarily drop the 
lock, we must
                        // unlock all lower caches to prevent locking order 
inversion.
                        cacheChainLocker.Unlock(cache);
-                       cache->Resize(cache->virtual_base + newSize, priority);
+                       cache->Resize(cache->virtual_base + offset, priority);
                        cache->ReleaseRefAndUnlock();
                }
 
@@ -674,29 +698,24 @@ cut_area(VMAddressSpace* addressSpace, VMArea* area, 
addr_t address,
        }
 
        // Cut the beginning only?
-       if (area->Base() >= address) {
-               addr_t oldBase = area->Base();
-               addr_t newBase = lastAddress + 1;
-               size_t newSize = areaLast - lastAddress;
-               size_t newOffset = newBase - oldBase;
-
-               // unmap pages
-               unmap_pages(area, oldBase, newOffset);
-
+       if (area->Base() == address) {
                // resize the area
-               status_t error = addressSpace->ShrinkAreaHead(area, newSize,
+               status_t error = addressSpace->ShrinkAreaHead(area, 
area->Size() - size,
                        allocationFlags);
                if (error != B_OK)
                        return error;
 
+               // unmap pages
+               unmap_pages(area, address, size);
+
                if (onlyCacheUser) {
                        // Since VMCache::Rebase() can temporarily drop the 
lock, we must
                        // unlock all lower caches to prevent locking order 
inversion.
                        cacheChainLocker.Unlock(cache);
-                       cache->Rebase(cache->virtual_base + newOffset, 
priority);
+                       cache->Rebase(cache->virtual_base + size, priority);
                        cache->ReleaseRefAndUnlock();
                }
-               area->cache_offset += newOffset;
+               area->cache_offset += size;
 
                return B_OK;
        }
@@ -704,9 +723,9 @@ cut_area(VMAddressSpace* addressSpace, VMArea* area, addr_t 
address,
        // The tough part -- cut a piece out of the middle of the area.
        // We do that by shrinking the area to the begin section and creating a
        // new area for the end section.
-       addr_t firstNewSize = address - area->Base();
-       addr_t secondBase = lastAddress + 1;
-       addr_t secondSize = areaLast - lastAddress;
+       addr_t firstNewSize = offset;
+       addr_t secondBase = address + size;
+       addr_t secondSize = area->Size() - offset - size;
 
        // unmap pages
        unmap_pages(area, address, area->Size() - firstNewSize);
@@ -805,7 +824,7 @@ cut_area(VMAddressSpace* addressSpace, VMArea* area, addr_t 
address,
 }
 
 
-/*!    Deletes all areas in the given address range.
+/*!    Deletes or cuts all areas in the given address range.
        The address space must be write-locked.
        The caller must ensure that no part of the given range is wired.
 */
@@ -834,15 +853,12 @@ unmap_address_range(VMAddressSpace* addressSpace, addr_t 
address, addr_t size,
 
        for (VMAddressSpace::AreaIterator it = addressSpace->GetAreaIterator();
                        VMArea* area = it.Next();) {
-               addr_t areaLast = area->Base() + (area->Size() - 1);
-               if (area->Base() < lastAddress && address < areaLast) {
-                       status_t error = cut_area(addressSpace, area, address,
-                               lastAddress, NULL, kernel);
-                       if (error != B_OK)
-                               return error;
-                               // Failing after already messing with areas is 
ugly, but we
-                               // can't do anything about it.
-               }
+               status_t error = cut_area(addressSpace, area, address, size, 
NULL,
+                       kernel);
+               if (error != B_OK)
+                       return error;
+                       // Failing after already messing with areas is ugly, 
but we
+                       // can't do anything about it.
        }
 
        return B_OK;

############################################################################

Revision:    hrev54274
Commit:      621f53700fa3e6d84bfca3de0a36db31683427ae
URL:         https://git.haiku-os.org/haiku/commit/?id=621f53700fa3
Author:      Michael Lotz <mmlr@xxxxxxxx>
Date:        Sun May 24 22:49:37 2020 UTC
Committer:   waddlesplash <waddlesplash@xxxxxxxxx>
Commit-Date: Sat May 30 01:47:40 2020 UTC

AVLTree: Add convenience LeftMost/RightMost with no arguments.

They return the left-most and right-most nodes of the entire tree, i.e.
the traversal starts from the root node.

Change-Id: I651a9db6d12308aef4c2ed71484958428e58c9bc
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2838
Reviewed-by: waddlesplash <waddlesplash@xxxxxxxxx>
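
A hypothetical call site; the tree type and its element type are only
placeholders for this sketch:

	// Tree of areas keyed by base address; the Definition is omitted.
	AVLTree<AreaTreeDefinition> tree;

	VMArea* lowest = tree.LeftMost();    // value with the smallest key
	VMArea* highest = tree.RightMost();  // value with the largest key

	// Previously the overall left-most value had to be reached
	// indirectly, for example via tree.GetIterator().Next().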

----------------------------------------------------------------------------

diff --git a/headers/private/kernel/util/AVLTree.h 
b/headers/private/kernel/util/AVLTree.h
index f0e27f83db..c59021e8a8 100644
--- a/headers/private/kernel/util/AVLTree.h
+++ b/headers/private/kernel/util/AVLTree.h
@@ -47,7 +47,9 @@ public:
                        Value*                          Previous(Value* value) 
const;
                        Value*                          Next(Value* value) 
const;
 
+                       Value*                          LeftMost() const;
                        Value*                          LeftMost(Value* value) 
const;
+                       Value*                          RightMost() const;
                        Value*                          RightMost(Value* value) 
const;
 
        inline  Iterator                        GetIterator();
@@ -258,6 +260,15 @@ AVLTree<Definition>::Next(Value* value) const
 }
 
 
+template<typename Definition>
+inline typename AVLTree<Definition>::Value*
+AVLTree<Definition>::LeftMost() const
+{
+       AVLTreeNode* node = fTree.LeftMost();
+       return node != NULL ? _GetValue(node) : NULL;
+}
+
+
 template<typename Definition>
 inline typename AVLTree<Definition>::Value*
 AVLTree<Definition>::LeftMost(Value* value) const
@@ -270,6 +281,15 @@ AVLTree<Definition>::LeftMost(Value* value) const
 }
 
 
+template<typename Definition>
+inline typename AVLTree<Definition>::Value*
+AVLTree<Definition>::RightMost() const
+{
+       AVLTreeNode* node = fTree.RightMost();
+       return node != NULL ? _GetValue(node) : NULL;
+}
+
+
 template<typename Definition>
 inline typename AVLTree<Definition>::Value*
 AVLTree<Definition>::RightMost(Value* value) const
diff --git a/headers/private/kernel/util/AVLTreeBase.h 
b/headers/private/kernel/util/AVLTreeBase.h
index ea9c91de5e..26ce613065 100644
--- a/headers/private/kernel/util/AVLTreeBase.h
+++ b/headers/private/kernel/util/AVLTreeBase.h
@@ -46,7 +46,9 @@ public:
 
        inline  AVLTreeNode*            Root() const    { return fRoot; }
 
+       inline  AVLTreeNode*            LeftMost() const;
                        AVLTreeNode*            LeftMost(AVLTreeNode* node) 
const;
+       inline  AVLTreeNode*            RightMost() const;
                        AVLTreeNode*            RightMost(AVLTreeNode* node) 
const;
 
                        AVLTreeNode*            Previous(AVLTreeNode* node) 
const;
@@ -190,11 +192,25 @@ protected:
 };
 
 
+inline AVLTreeNode*
+AVLTreeBase::LeftMost() const
+{
+       return LeftMost(fRoot);
+}
+
+
+inline AVLTreeNode*
+AVLTreeBase::RightMost() const
+{
+       return RightMost(fRoot);
+}
+
+
 // GetIterator
 inline AVLTreeIterator
 AVLTreeBase::GetIterator() const
 {
-       return AVLTreeIterator(this, NULL, LeftMost(fRoot));
+       return AVLTreeIterator(this, NULL, LeftMost());
 }
 
 

