This will be my last post, but don't take it badly, I just have important things in real life. #2618 OK, you said it might be your imagination. BTW, are you aware there's no general consensus on what PH (process hollowing) exactly means? So let's try to define it as:

1. spawn a legit process by CreateProcess w/ CREATE_SUSPENDED,
2. find the base address by NtQueryInformationProcess & ReadProcessMemory (or ReadRemotePEB & ReadRemoteImage),
3. from that address, unmap its memory by NtUnmapViewOfSection,
4. calculate the diff w/ the base address of your code, then allocate RWX memory by VirtualAllocEx,
5. write the code by WriteProcessMemory,
6. relocate the base of your code,
7. reset the thread context by GetThreadContext & SetThreadContext,
8. resume the thread by ResumeThread.

You can make a behavior sig which matches this sequence w/ validation of the parent, but the problem is that such a sig will miss all the similar techniques. The opposite extreme is blocking any CreateProcess w/ CREATE_SUSPENDED, but this will cause many FPs. I believe most solutions go somewhere in-btwn, but it depends totally on the rule coder's heuristic decisions.

We know HMPA's PH protection has caused FPs - I only found an explicit mention for VMware ThinApp, but IIRC there was another case in the past. Note it's not a BB but a partial HIPS, so FPs will be less problematic; a BB usually removes the exe, meaning rules for a BB must be written more carefully.

IDK if the alleged AV insider was real, but it seems to be becoming common knowledge that traditional BB can't catch 0day malware used in real targeted attacks (tho "0day malware" is becoming a buzzword), 'cause it can only block known patterns - criminals just need to find another one. This is why BB sigs are updated every week. Note I distinguish HIPS & ML from BB, and your understanding is not 100% correct: not all AVs upload files, and the sandbox is not directly relevant to cloud ML analysis. ML doesn't care where the data came from, and a local BB component is a good source of that data (*).
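To show what I mean by a sig matching this sequence w/ parent validation, here's a minimal sketch in Python. It assumes you already have an ordered API trace per target process; the event-name strings, trace format, and trusted-parent check are all made up for illustration, not any real EDR/HIPS schema:

```python
# Hypothetical behavior-sig sketch: match the hollowing call sequence
# in an API trace. Event names/format are illustrative assumptions.
HOLLOWING_SEQUENCE = [
    "CreateProcess(CREATE_SUSPENDED)",
    "NtUnmapViewOfSection",
    "VirtualAllocEx(PAGE_EXECUTE_READWRITE)",
    "WriteProcessMemory",
    "SetThreadContext",
    "ResumeThread",
]

def matches_hollowing(trace, parent_is_trusted):
    """Return True if the trace contains the hollowing steps in order.

    `trace` is a list of event-name strings for one target process,
    in observed order. Parent validation cuts FPs: a trusted updater
    legitimately doing this sequence is ignored.
    """
    if parent_is_trusted:
        return False
    it = iter(trace)
    # Subsequence match: each step must appear in order, but other
    # events may be interleaved between them.
    return all(any(step == ev for ev in it) for step in HOLLOWING_SEQUENCE)
```

Note how literal it is - swap one API for an equivalent (e.g. mapping a section instead of VirtualAllocEx) and the sig silently misses, which is exactly the weakness of this approach.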
Many MLs extract & compress some characteristics and map them into a high-dimensional feature space; then, depending on the algorithm, they can either group, separate, or grade the source. Although trained w/ known data, the model is agnostic about whether a given source is known or unknown, so it can detect unseen malware and probabilistically miss known malware, but intentionally bypassing it is hard thx to its abstract nature, contrary to traditional BB. I warn against using another blanket term like "learning malicious behavior" - no such thing actually happens, and the phrase is only useful as an interface language. (*) A well-known technique to bypass sandbox analysis is simply to wait before doing the malicious act, but it can't deceive ML if this info was sent by the local AV.
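To make the "map into feature space, then grade" idea concrete, here's a toy sketch. The two features and the benign centroid are invented for illustration - a real engine uses thousands of features and a trained model, not a hand-picked centroid:

```python
import math

# Toy sketch: feature extraction + "grading" by distance in feature
# space. Feature choice and centroid values are illustrative only.
def extract_features(sample: bytes):
    """Compress a sample into a small numeric vector (its 'characteristics')."""
    if not sample:
        return [0.0, 0.0]
    counts = [0] * 256
    for b in sample:
        counts[b] += 1
    n = len(sample)
    # Byte entropy in bits (0..8): packed/encrypted payloads score high.
    entropy = -sum(c / n * math.log2(c / n) for c in counts if c)
    # Fraction of printable ASCII bytes.
    printable = sum(1 for b in sample if 32 <= b < 127) / n
    return [entropy, printable]

BENIGN_CENTROID = [4.5, 0.8]  # assumed center of known-good samples

def suspicion_score(sample: bytes) -> float:
    """Grade a sample by its distance from the benign centroid.

    The model never asks whether the sample is 'known'; an unseen
    payload simply lands far from the benign region, which is why it
    can catch new malware and also probabilistically miss known ones.
    """
    return math.dist(extract_features(sample), BENIGN_CENTROID)
```

This also shows why such grading is agnostic about the data source: feed it features derived from local BB telemetry instead of file bytes, and the same machinery applies.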