As recently announced on Jessica Hyde’s weekly YouTube Live show, #CacheUp, we’re kicking off the Magnet Weekly CTF Challenge for anyone in the community who would like to participate. With 2020 being such a challenging year for many, our goal is to provide a little fun for participants in Quarter 4 as we wrap up the year. Now, let’s talk specifics about the Magnet Weekly CTF Challenge!
The Details
When
The first question for the Magnet Weekly CTF Challenge goes live October 5 at 11:00AM ET. During the month of October, we will be utilizing an Android image which can be found here. Feel free to download and pre-process in your tool of choice, so you’re ready to go!
Also, keep an eye on social media and your inbox, as we’ll be providing a weekly video on Mondays at 11:00AM ET to announce the current week’s challenge as well as to discuss the previous week’s solve.
Head on over now to register for the Magnet Weekly CTF! Once on the landing page, you’ll simply click Register in the top right corner of the screen. After registering, you’ll be all set to start competing against your fellow forensicators!
Members of the forensic community can be quite competitive,
so let’s discuss how the Magnet Weekly CTF Challenge will be scored.
Each Monday at 11:00AM ET, starting with the launch of the event on October 5, a new weekly challenge will be unlocked for participants to solve. You’ll have one week to solve the question, followed by one additional week to write a blog for additional points. The Magnet team will review the questions two weeks after they are initially posted, walking through the solution on the weekly video. Each week when the new question is unlocked, the previous week’s question will be removed, so make sure to get your answers in!
With that, let’s discuss scoring!
Scoring
With the Magnet Weekly CTF Challenge, we want to give participants as many ways as possible to score points. Each week’s challenge will have a set point value based on the complexity of the solve, ranging from 10 to 75 points. In addition to receiving points for providing the correct answer to each week’s challenge, you can also receive additional points in the ways listed below.
Point Value: Description
25: Blogging the solve. Make sure to tag #MagnetWeeklyCTF in a write-up of your solve on social media.
50: Successfully creating a new custom artifact (if applicable) and having it posted on our Artifact Exchange. (Points awarded to the first person who successfully submits their custom artifact for review.)
Who doesn’t love winning prizes? This CTF will be based on the cumulative score a user earns throughout the 4th quarter of 2020 (Oct-Dec). Whoever has the highest score at the end wins the grand prize.
But wait, there’s more! Throughout the challenge we will also randomly select weekly winners in a prize drawing each week, as well as selecting a monthly winner, based on who has been awarded the most points during the given month.
If you have any questions, don’t hesitate to reach out in the Weekly CTF Challenge channel on the Magnet Forensics Discord Server!
In iOS, one of the more vexing things I’ve found when working through data or helping a student with questions usually comes back to tracking which application is responsible for putting data in a specific place. Thanks to some fantastic work done by others, including Alexis Brignoni’s research on the ApplicationState.db in the FrontBoard directory, that database has become one of my first go-to spots for building a “treasure map” of applications to deal with those annoying AppGUIDs that Apple assigns each app on a device. These annoying things I speak of can be found when you’re looking for data in:
/private/var/mobile/Containers/Data/Application
Luckily, most tools will parse out the ApplicationState.db and map each one of these unique IDs to the application which is stored within.
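When tools aren’t handy, the same mapping can be pulled manually with a short script. A minimal sketch, assuming the commonly documented schema for ApplicationState.db (an `application_identifier_tab` table holding one bundle ID per row); verify the schema against your own copy of the database:

```python
import sqlite3


def list_bundle_ids(db_path):
    """Return the app bundle IDs recorded in ApplicationState.db.

    Assumes the commonly documented schema: a table named
    application_identifier_tab with an application_identifier
    column listing one bundle ID per row.
    """
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT application_identifier "
            "FROM application_identifier_tab "
            "ORDER BY application_identifier"
        ).fetchall()
    finally:
        con.close()
    return [r[0] for r in rows]
```

From there, each bundle ID can be matched against the GUID-named folders you encounter during review.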
Great! So much easier to figure out what apps are living where. However, sometimes you stumble upon a file of interest within a folder and you’re left with the task of matching the directory path back to this database. Maybe you’re also in a situation where you’re working with just the raw image and limited access to tools. How can we find the app’s bundleID from within the directory itself?
Within the application data path, at the root there should be a file named “.com.apple.mobile_container_manager.metadata.plist”, which has the same name in every application directory. This plist contains keys that hold the bundleID of the application, which is great if you’re in a pinch and don’t want to jump back and forth.
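Reading the plist only takes a few lines. A sketch using Python’s stdlib plistlib; the `MCMMetadataIdentifier` key name is an assumption based on commonly published samples, so verify it against your own evidence:

```python
import plistlib


def container_owner(plist_path):
    """Parse a .com.apple.mobile_container_manager.metadata.plist
    and return the identifier of the app (or group) that owns the
    sandbox. MCMMetadataIdentifier is the key name seen in commonly
    published samples -- confirm it on your own data.
    """
    with open(plist_path, "rb") as f:
        meta = plistlib.load(f)
    return meta.get("MCMMetadataIdentifier")
```

plistlib handles both binary and XML plists, so the file can be fed in straight from the extraction.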
The more interesting thing is what happens when you do a search for this file across your iOS device. If you do, you’ll see that the .com.apple.mobile_container_manager.metadata.plist file appears in a lot of places including:
Whoa. That’s a lot more places for us to explore to make our treasure map. So what is this file anyway? First, let’s talk about sandboxing. Apple heavily utilizes sandboxing in iOS to prevent applications from getting access to data they’re not supposed to have. Each application is given its own sandbox to play in, and only that area. This plist file allows us to see what sandbox we’re in and, from an application perspective, who owns that sandbox. Using this information, we can break down a little bit more of the path information above to figure out why certain apps may be keeping data in a location.
/private/var/containers/Bundle/Application/
This directory is where the .app lives on the device. There’s some additional data we can track here about the application itself and who downloaded it onto the device. Along with the .app, there’s an iTunesMetadata and BundleMetadata plist file that can list out information such as when the application was downloaded, what version of the app was downloaded, and what AppleID actually downloaded it.
/private/var/containers/Shared/SystemGroup/
This directory is similar to the one above, but speaks to core applications of the iOS device. There’s less information in here, but still a .plist file that can reveal what system application is responsible for the container.
/private/var/containers/Data/System/
Again, similar to the directory above, but these seemed to be system apps that didn’t want to share information between core applications. Again, less relevant information except for the bundleID that owns the container.
/private/var/mobile/Containers/Shared/AppGroup/
Now this is where the REAL fun begins! I mentioned sandboxing earlier. Apple says that applications are not allowed to share information without requesting it through official channels first. In order to share information, application developers can assign a “group” to their application. According to Apple’s developer information, these groups can then share data between each other. You may have also seen these in backup-style images that are listed as “AppDomainGroup-group.bundleID” instead of the “AppDomain-com.bundle.ID” structure. I have often used this path when I couldn’t find the data I was looking for in the main Application Data path.
Now for the downside: the ApplicationState.db doesn’t contain information about this path. The upside? Each application’s “Shared/AppGroup” directory will have one of our .com.apple.mobile_container_manager.metadata.plist files! Woohoo! To make it even better, Alexis Brignoni has built support into iLEAPP to list out all of these files from this directory, allowing us to match the contents of each file to the path it lives in. [Get iLEAPP here]
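The same matching can also be scripted directly against a full file system extraction. A rough sketch that walks each GUID-named container folder and maps it to its owner; the directory layout and the `MCMMetadataIdentifier` key are assumptions based on commonly published samples:

```python
import os
import plistlib

METADATA = ".com.apple.mobile_container_manager.metadata.plist"


def map_containers(root):
    """Build a 'treasure map' of container GUID -> owning bundle/group
    ID for a directory such as .../Containers/Shared/AppGroup/ from a
    full file system extraction.
    """
    treasure_map = {}
    for guid in sorted(os.listdir(root)):
        plist_path = os.path.join(root, guid, METADATA)
        if not os.path.isfile(plist_path):
            continue  # skip stray files that aren't container folders
        with open(plist_path, "rb") as f:
            meta = plistlib.load(f)
        treasure_map[guid] = meta.get("MCMMetadataIdentifier")
    return treasure_map
```

Pointing the same function at the Data/Application or Data/PluginKitPlugin paths should work identically, since the metadata plist appears in each.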
An application may also have more than one folder in this path. A couple of examples would be the Dropbox application, which had over 4 different containers here, and the Spark email app, which had 2. More importantly, some applications may keep all of their relevant data here instead of the /private/var/mobile/Containers/Data/Application/ directory. An example of this would be the Spark email application (iTunes web link), which chooses to store all of its relevant databases within the /private/var/mobile/Containers/Shared/AppGroup/ directory structure instead of the more common /private/var/mobile/Containers/Data/Application/ directory structure. [Other notable examples of this include WhatsApp and Signal]
/private/var/mobile/Containers/Data/Application
The place where we’re supposed to look for app data. Well understood and documented. Just not always where the data we’re after ends up, after all.
/private/var/mobile/Containers/Data/PluginKitPlugin/
In my test devices, these appeared to be Apple services (or internal daemons) that could be tracked using the same .com.apple.mobile_container_manager.metadata.plist files.
I’ve recently had two situations where I was assisting students in understanding where crucial data for their cases had wound up within this area. Populating exact matching data here is difficult, but a few situations arose in which I was able to track a good bit of information. Because Apple allows applications to tie together using plugins (think of the Giphy keyboard, my personal favorite), these plugins can keep data within this directory structure. One test case was to figure out why a bunch of illicit videos were showing up within a specific directory. By using the .com.apple.mobile_container_manager.metadata.plist file, the analyst was able to figure out which plugin was helping to put data in this directory and which bundleID the plugin belonged to. In this case, it was tied to com.apple.mobileslideshow’s PhotoMessagesApp.
Note: com.apple.mobileslideshow is the Photos application on iOS.
Now that we know what plugin and what application the directory belongs to, we can then go out and try to generate data to prove how this information gets in here. In my example, I used the photo picker plugin within iMessage and then modified the video directly within that plugin without launching the original com.apple.mobileslideshow application. Using KnowledgeC, you can see in the following screenshots that when these files were created within the PluginKitPlugin/APPGUID/ directory structure, the MobileSMS application and other plugins overlap.
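Pulling those app-focus events out of knowledgeC.db for yourself takes only a short query. A sketch, assuming the widely documented ZOBJECT layout (a '/app/inFocus' stream with the bundle ID in ZVALUESTRING and timestamps in Apple absolute time, i.e. seconds since 2001-01-01 UTC); column names may shift between iOS versions, so confirm against your own database:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Apple "absolute time" epoch used throughout knowledgeC.db
APPLE_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)


def app_focus_events(knowledgec_path):
    """List (bundle_id, start_time) app-focus events from knowledgeC.db,
    using the commonly documented ZOBJECT schema.
    """
    con = sqlite3.connect(knowledgec_path)
    try:
        rows = con.execute(
            "SELECT ZVALUESTRING, ZSTARTDATE FROM ZOBJECT "
            "WHERE ZSTREAMNAME = '/app/inFocus' ORDER BY ZSTARTDATE"
        ).fetchall()
    finally:
        con.close()
    return [(bundle, APPLE_EPOCH + timedelta(seconds=start))
            for bundle, start in rows]
```

Lining these events up with file creation times in the plugin container is what lets you show which app was in focus when the data landed there.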
Other plugins such as ‘com.apple.mobileslideshow.photo-picker’ have come up in other investigations but sometimes it’s difficult to populate these directories without KnowledgeC or PowerLog to go on. By at least understanding the owning BundleID, we can start to understand possibilities for how data got where it is.
Now we have a better way to build out our own treasure maps, knowing that .com.apple.mobile_container_manager.metadata.plist files will typically provide the “X” that marks the spot.
This post was authored by Christopher Vance, Manager, Curriculum Development at Magnet Forensics. It also appears on his D20 Forensics Blog.
We’re excited to announce the availability of Magnet OUTRIDER 2.0! Magnet OUTRIDER is an ultra-fast triage tool that empowers law enforcement and examiners to quickly and easily preview devices, on-scene or in the lab.
Now with Magnet OUTRIDER 2.0, you can scan internet history files and capture more data with advanced live system scan options, with even faster speed compared to prior versions of OUTRIDER.
To upgrade to OUTRIDER 2.0, head over to the Customer Portal to download the latest version.
If you haven’t tried OUTRIDER yet, see for yourself how fast and simple OUTRIDER is at finding CSAM and illicit apps. Request your free 30-day trial here.
Get the Intel You Need with 30% Faster Scans
Now, scans with OUTRIDER are 30% faster, giving you the advantage of time on-scene to find critical evidence and intel, minimizing disruptions to the community, and giving you a head start on device triage when you’re back in the lab.
No other tool provides forensic examiners with ultra-fast and accurate scans to support ICAC teams on-scene with actionable intelligence.
(Based on our internal testing, an example test case of more than 1M files was scanned in an average of 41 seconds, a 30% improvement over prior versions of OUTRIDER, inclusive of the additional data capture features introduced in v2.0.)
Scan Internet History for Keywords
In addition to the CSAM-related keyword list that is included in OUTRIDER, you can now import an NCMEC CyberTip report to bring in URLs and file names as keywords for locating files or matching on browser history.
Or, for more flexibility, create your own URL/keyword list and upload it to OUTRIDER. IP addresses imported from an NCMEC CyberTip report will also be used to alert you if an imported IP address matches the current external IP address of the live system being scanned.
Capture More with Advanced Live System Scan Options
For scans of live systems, Magnet OUTRIDER can collect operating system artifacts, capture RAM, take a screenshot of the desktop, and obtain the external IP address for the system.
More Time-Saving Features in OUTRIDER 2.0
MORE IDENTIFIED APPS, MORE INTEL FOR YOUR INVESTIGATORS
Now, OUTRIDER lets you generate more actionable intelligence for your on-scene investigations or to help you more effectively triage devices in the lab. We’ve added new application categories, such as select VPN, messaging, and gaming applications, which can indicate nefarious activity and/or child grooming.
START YOUR SCAN QUICKER WITH A SIMPLIFIED SET-UP PROCESS
We’ve refreshed the OUTRIDER user interface and scan configuration to simplify and speed up the set-up process. The new screens flow from set-up to scan options in three separate steps to streamline configuration. OUTRIDER saves your settings so that you don’t need to waste time reconfiguring with each scan.
Also, each scan option now has a traffic-light style visual indicator letting you know how much time an additional data capture may add to your search so that you can make informed decisions on the spot.
MAXIMIZE YOUR TIME IN THE LAB WITH TARGETED SCANNING
Your time in the lab is just as valuable as it is in the field; don’t waste it scanning full drives, computers, or servers when you only need to examine specific folders. Now you can save even more time by performing very targeted scans on one or more selected folders. Additionally, you can scan mobile extractions by saving the mobile image to a dedicated folder and performing a targeted scan on it.
Try Magnet OUTRIDER 2.0 Today!
If you’re already using Magnet OUTRIDER, download OUTRIDER 2.0 in the Customer Portal. If you want to see how Magnet OUTRIDER can help you dramatically reduce the time it takes to find CSAM on computers and external drives, request a free trial today!
Introducing one of our newest Magnet Forensics Trainers, Jerry Hewitt.
Jerry comes to us from an extensive background in UK law enforcement and, as a trainer, loves learning from his students. Check out our interview with him below!
MF: Tell us about your life before becoming a Trainer.
JH: Like many people, I followed my father’s footsteps into Law Enforcement. I have always had an interest in technology and gadgets. I remember the excitement of getting my first electronic calculator and LED digital watch!
Building my first ZX80 Sinclair computer in the late 1970s was my start in computing. Becoming a licensed Amateur Radio Operator in the early 1980s led me to connect the two hobbies, and my life with computers well and truly started.
While this was going on, I was a proud member of Northumbria Police, moving from Uniform beat patrol to the Motor Patrols Department and then Air Support, where I was the training officer. It was during my time there that I managed to link up some maritime navigation software on a laptop to a Trimble GPS system, giving us our first moving map display, back in 1995.
I spent some time as an authorised firearms officer (not that many of us in the UK at the time), then eventually moved into the Criminal Investigation Department (CID). At the same time, I worked as a remote staffer with AOL UK (yep, I do still have an AOL Email address) in the Computing Help Forum, where I tried to assist members with their PC problems. During my time in CID I investigated a great deal of cases which involved Child Sexual Exploitation and Digital Forensics.
In 2009, after thirty years’ service, I retired from the Police Force, only to go back three months later as a High-Tech Crime Unit Forensic Investigator. It was during this time that I first came across JadSoftware’s Internet Evidence Finder (IEF). Six years after that, I became the Digital Forensic Unit Manager, running a team of 30 staff, and, with other stakeholders, helped make Northumbria Police DFU the excellent unit it is today.
MF: What made you want to be a Trainer?
JH: Throughout my career I have been involved in the training of staff. I have been a Tutor Constable in both the uniform branch and in the CID. I have trained staff to use technical airborne equipment such as thermal imaging cameras, complex radio systems, GPS and Tactical tracking equipment, along with airborne navigation and police tactics. Even recently, I regularly provided classroom inputs to police officers and staff, lawyers and judges.
I really enjoy standing in front of a group of people and, hopefully, holding their attention. It’s a great feeling when the presentation takes on a life of its own and everyone participates. To see people learning the subject, then questioning it, and then using the imparted knowledge to dig deeper, is very rewarding. My aim is always to teach what they need but then send them away wanting to know more. Hopefully, I’ll get some feedback to improve my skills too!
MF: What type of training have you taken part in personally? What is your favorite part of the role?
JH: Due to the diverse nature of UK Policing, I can’t remember a time when I hadn’t just been on a course, was about to go on one, or was fighting to get the course I wanted: everything from Advanced Driving techniques, Firearms training, and Air Support courses to eventually CID training. Then, after moving into the world of Digital Forensics, I completed a variety of courses, both internal and external, that improved my forensic skills.
MF: What excites you the most about a new class?
JH: I always enjoy meeting new people. I know that I will learn something from them and that I will, hopefully, be able to improve their knowledge and understanding too. I really enjoy the diversity and, though sometimes the low-ball questions can put you on the spot, I like the fact I will be challenged as I will try to challenge the students.
It’s nice when the class runs smoothly, but sometimes it’s the ones where things go wrong that end up being the most memorable. Fire alarms and blue screens of death are always fun to deal with.
MF: Do you ever learn anything from the students?
JH: Even though I may teach the same topics over and over, each session is as unique as the students, every day is a school day for me too.
We all know just how hard it is to keep up with all the new devices, trends, apps, and software. We are all constantly playing catch-up, and since students come from a variety of backgrounds, there are always new tips and tricks they can bring into the lesson. When they do, I will be happy to share them with the next group.
MF: Is there a particular moment that stands out the most to you in your career in the classroom?
JH: It wasn’t so much classroom based as helicopter based. I was completing the final check ride for a Police Air Observer who was reaching the last few days of his training course. He was a keen student but, sadly, I didn’t think he had the aptitude to navigate from the air and deal with all the other issues that were going on in his headset. This flight was ‘make or break’, and he knew it. I had tried all sorts of variations to try and teach him, but I was running out of ideas and even doubted my own ability. Half an hour into the flight he looked up and wow!! He had done it. He knew where we were, found the target, had dealt with the radios and formulated the tactics. On his very last flight he got it …. I am not sure who was happier, him or me. That was a great moment, and he got to do the job he had always wanted to do. More of those moments please… maybe on the ground this time though!
Lately, we have had to move the training online, thanks to the COVID-19 virus. Challenging times, but they will be met.
MF: What do students get out of training in person that they can’t get on their own?
JH: Self-teaching has its merits, but it doesn’t always mean that you will find the best way of completing the task; the best way can sometimes be a simple keystroke or an in-depth analysis. Having an instructor with experience and knowledge can set the student down the right path. Where there are numerous people in the group, they will bounce ideas off each other during the breaks and even over lunch or dinner. Shared knowledge and experiences are invaluable, though it does mean that, as an instructor, you can end up with some really difficult questions to find answers for!
Learning is an enriching experience; people make it even more so.
MF: How prepared do you feel students are to use Magnet Forensics products after taking the training course?
JH: From my previous experience, every student that has carried out product specific training has always returned to their role with more knowledge and more confidence in their own ability and in that of the product.
MF: What is most unique about Magnet Forensics’ approach to training?
JH: Magnet Forensics’ ethics very closely match that of UK Law Enforcement. Their approach is to give the tools to Investigators and examiners to get the job done. Magnet Forensics Trainers have a solid background in Law Enforcement where it is all about the artifacts and how they relate to the investigation. The training is based on this model so the right data can be extracted and reported on in the best fashion possible.
MF: Why do you think certification is important to examiners?
JH: In the UK, Law Enforcement is going through a difficult but necessary process in having their methods for Digital Forensics accredited and validated. It is only right that the competency of the DFU Investigators can also be seen. Having a Magnet Certified Forensic Examiner certificate shows to all that the relevant training has been carried out on the tool that is in use. It lends credibility to the evidence and the Investigator.
MF: How do you manage to keep up on the latest trends in digital forensics?
JH: I have been passionate about technology for a long time. I like to watch trends and am always interested in what’s new. The Internet of Things means that there are always new toys and gadgets on the market. Ask my wife, our house has voice activated everything! There is always something new coming out, and I will always try to get the opportunity to see how that device’s data could be used in an investigation. Where I can, I go to trade shows or simply spend time scouring the internet forums. Research and development is something every forensic investigator needs to have a passion for. Meeting peers is a great way of learning and is something that should be encouraged. I have just enrolled in an online course on Open Source Intelligence, which will give me a better insight when I am teaching this topic.
MF: What trends do you see coming down the pipeline in digital forensics?
JH: As long as there is not a slowdown in technology due to COVID-19, I think the biggest change we are facing is the move from offline data storage and dead box forensics, to Online Cloud and Mobile evidence. With the imminent arrival of 5G and the change of user habits this will be where the evidence is found in the future … Watch this space!
Thank you, Jerry! Welcome to the Training team and to Magnet Forensics overall—we look forward to seeing your future contributions.
For the last few years now, most forensic examinations of iOS devices were limited to the data available in an iTunes backup, and only if you had the user’s passcode. Sure, you may have gotten the odd jailbroken device, but it typically didn’t matter whether you had a ten-thousand-dollar commercial forensics tool or a free acquisition tool like Magnet ACQUIRE; you were getting the same thing: an iTunes backup / logical collection of files. If you didn’t have the user’s passcode, you weren’t getting anything, so a backup was better than nothing.
Enter
Grayshift, the makers of GrayKey, a tool which allows law enforcement to crack
the user’s passcode, bypassing the Data Protection delay and gaining access to
the entire file system of iOS devices. This not only has provided examiners
with access to devices that were previously inaccessible due to not having the
passcode, but also gave them access to iOS data that hasn’t been available in
years due to the limited data available via logical collections. In some cases,
GrayKey revealed data we’ve never had the opportunity to investigate before as
well!
As many know, GrayKey is only an acquisition tool, meaning it will allow examiners to gain entry into iOS devices and make extractions of the information found, but it doesn’t assist with any analysis. Data acquired via a GrayKey extraction is output into a variety of zip containers (BFU, AFU, Full Files, and mem), along with a keychain.plist. Magnet AXIOM can then be used for analysis of these files.
For investigators, the ideal GrayKey image you want when examining an iOS device is the files.zip. This has the entire iOS file system present and provides the maximum amount of information for examiners to use in the course of their examination. While the other available image types provide great forensic value for casework, when available, always examine the files.zip image first.
Your GrayKey will also produce a passwords.txt and an HTML report of the device extraction. While the passwords.txt list is great for examiners to look through, make sure to load the keychain.plist into AXIOM for parsing, not the passwords.txt file.
Keep in Mind: Even if you have the user’s passcode, still utilize the GrayKey for the extraction, so that you have as much data as possible to work with for your investigation.
The next container is mem.zip. This is a process memory dump of the iOS device. Prior to Grayshift’s technology, examiners hadn’t acquired process memory of iOS devices for routine analysis. Memory acquisition on Android can be done but as it requires a phone restart, the investigative value is minimal. In this case, the iOS process memory contains valuable information and should most certainly be loaded into AXIOM as well.
Finally, the last file is the keychain.plist. Most examiners are familiar with the iOS keychain as it contains the user accounts, passwords, and keys for many of the apps that the user has saved or used which can also be valuable for investigators wanting to authenticate to cloud sources or otherwise. The keychain that GrayKey creates is slightly different than the one you would get in an iTunes backup or found natively on a jailbroken device. The keychain found in the file system is actually a SQLite database and hasn’t always been available due to limitations in acquisitions prior to this. The keychain in an iTunes backup is also a plist but is formatted differently so we’ve added specific support for the GrayKey keychain.plist in AXIOM.
Now that we’ve highlighted different exports examiners get from utilizing a GrayKey in their acquisition phase of their iOS investigation, let’s dive into analyzing this data with Magnet AXIOM.
Loading GrayKey Evidence into AXIOM
There are several ways to load your recently acquired
iOS device into AXIOM. Depending on the needs of your investigation, you may
find one method better than others for your workflow.
To start, instead of loading your files.zip file you’ve acquired, I’d first recommend loading the keychain.plist in via:
One of the great benefits of using AXIOM for your analysis is the ability to add multiple pieces of evidence at the same time before processing begins, saving investigators time. That being said, when dealing with iOS devices that you have the keychain.plist for, it’s beneficial for the examiner to process just the keychain and review the data of that sole piece of evidence before navigating to the Case Dashboard of AXIOM Examine and hitting “Add Evidence”. Why, you ask? Great question! If an examiner has already reviewed the keychain.plist, they will have a good idea of the apps that may be found on the suspect device. Better yet, when they load the files.zip into AXIOM for processing, they can supply potential passwords and key values for encrypted apps like Snapchat, WickrMe, or iOS Notes, so that during processing AXIOM can decrypt these databases for the examiner to analyze during the course of their investigation, without the need to re-process the evidence. We’ve added information within the artifact selection category for apps that we can decrypt during processing for examiners to reference when copying information out of the keychain into AXIOM, as seen below.
Next let’s look at loading in additional GrayKey evidence files into AXIOM.
To load GrayKey images into AXIOM, you can follow the same path as most other iOS images by going Mobile -> iOS -> Load Evidence -> Image and then choosing the files.zip first. Next, load the mem.zip in the same manner.
It’s important to note: make sure you select “Image” rather than “File” or “Folder” when loading in the files.zip. This will allow AXIOM to parse and carve the maximum amount of information out of the image.
For agencies with an online GrayKey, there is an additional option when it comes to loading your acquisitions into AXIOM. In conjunction with Magnet’s exclusive partnership with Grayshift, we have a direct connection which allows examiners to connect AXIOM directly to their GrayKey via a network connection. This direct-connect option has several benefits over the traditional procedure of downloading the images via a browser from your GrayKey before starting a case with your analysis software. The first benefit is speed; this procedure reduces the steps needed to start your investigation overall, and given the volume of iOS devices you may be acquiring, the time savings can really mount up. Secondly, AXIOM will prompt you on where to save the GrayKey image as it’s acquired and processed for examination. As a part of this process, we will also automatically hash the files we are acquiring, so that you can quickly confirm that your hash values match those shown in the GrayKey GUI. On numerous occasions we’ve heard investigators say that when they loaded their GrayKey image into their analysis tool, things seemed “off”, or the image couldn’t be loaded. This can be due in part to the browser capping the size of the download, or to a packet dropping during the download of the image file.
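If you do download images through the browser instead, it’s worth computing your own hashes and comparing them to the values shown in the GrayKey GUI before processing, so a truncated download is caught early. A minimal streaming sketch (hash algorithm is an assumption; use whichever digest your GrayKey reports):

```python
import hashlib


def sha256_of(path, chunk_size=1024 * 1024):
    """Compute the SHA-256 of an acquired image in streaming fashion,
    so multi-gigabyte files.zip/mem.zip files never load fully into
    memory.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in fixed-size chunks until EOF (b"" sentinel).
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()
```

A mismatch between this value and the one the GrayKey reports is a strong hint the browser download was cut short and should be repeated.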
To load evidence via the
direct connect method, users will go Mobile -> iOS -> Connect to GrayKey.
Once connected, examiners can browse the available data that’s saved on the
GrayKey, selecting the different evidence files they wish to acquire and
process with AXIOM.
Once loaded you can choose whatever options and artifacts you wish to include in AXIOM for your given investigation. One feature that may help get additional data that otherwise wouldn’t be included in an artifact is the Dynamic App Finder (DAF).
We have also had a great time co-presenting at various events over this time (both live and, obviously, now virtual), including our Magnet User Summit, Magnet Virtual Summit, and Virtually Together events, as well as Techno Security, the National ICAC Conference, and various live webinars. We’ve been so happy to hear from customers and attendees that they’ve learned a lot from our sessions and managed to find new ways to get and analyze iOS evidence.
As we continue to progress our exclusive partnership, we wanted to take a moment to reiterate some of the ways you can use Magnet AXIOM and GrayKey together to get the most evidence out of your investigations. We will be sharing several blogs over the next few weeks highlighting these areas. For a more comprehensive how-to document, check out this blog from Trey Amick.
A Partnership That Demonstrates the Deepest Levels of Analysis
The technical partnership we have developed with Grayshift allows Magnet Forensics research, development, and training efforts to focus on supporting the deepest level analysis from devices accessed with GrayKey — including parsing and extracting data from GrayKey extractions.
Our R&D teams also collaborate directly with the development teams at Grayshift to truly understand the data that is parsed and presented. This helps us find critical insights in data acquired from images created by Grayshift tools. Our goal is to bring new artifacts and insights to the data extracted from encrypted mobile devices by Grayshift tools.
We also recognize the need for the same-day extractions provided by GrayKey. Through testing and research, we have found that sensitive artifacts, such as those found in Safari History, KnowledgeC, PowerLog, and ScreenTime, are time-sensitive: to parse the maximum results, you need to acquire data as soon as possible. GrayKey makes same-day extractions from locked iOS devices possible. Without same-day extractions, you may lose valuable data for your case. The combination of Magnet AXIOM + GrayKey allows digital forensic examiners to obtain relevant results from the full content images that GrayKey provides.
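To illustrate why these artifacts matter, here is a minimal Python sketch of reading the application-usage stream from a KnowledgeC.db extraction with the standard sqlite3 module. This is not part of AXIOM; the `app_usage` helper is a hypothetical name for illustration, and the code assumes the commonly documented ZOBJECT schema with Mac absolute timestamps (seconds since 2001-01-01 UTC):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Mac absolute time counts seconds from 2001-01-01 UTC.
COCOA_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def app_usage(knowledgec_path):
    """Return (bundle_id, start, end) tuples from the /app/usage stream."""
    con = sqlite3.connect(knowledgec_path)
    rows = con.execute(
        "SELECT ZVALUESTRING, ZSTARTDATE, ZENDDATE FROM ZOBJECT "
        "WHERE ZSTREAMNAME = '/app/usage' ORDER BY ZSTARTDATE"
    ).fetchall()
    con.close()
    return [
        (bundle,
         COCOA_EPOCH + timedelta(seconds=start),
         COCOA_EPOCH + timedelta(seconds=end))
        for bundle, start, end in rows
    ]
```

Because streams like this roll over on the device, the window of recoverable usage history shrinks the longer an extraction is delayed.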
Learn More About AXIOM & GrayKey in Our AX301 Course
As part of our relationship with GrayKey we offer MAGaK (Magnet AXIOM & GrayKey) Advanced iOS Examinations (AX301). This four-day course, available to law-enforcement agencies cleared by Grayshift, provides hands-on use of the GrayKey device and analysis of data from the acquired images.
In addition to using the GrayKey device to understand how to gain access to information previously unavailable to most forensic techniques, AX301 takes a deep dive into the artifacts at the core of the iOS file system. The class covers exclusive file system artifacts such as location history, third-party applications, and more. For students looking for an advanced iOS analysis course, we also offer Magnet AXIOM Advanced iOS Examinations (AX302), which focuses on the same deep dive into iOS artifacts offered in AX301.
Use Magnet AUTOMATE to Orchestrate the Processing of GrayKey Images
If you are regularly creating mobile images with GrayKey, you can use AUTOMATE to orchestrate the process of analysis once the acquisition is complete. With the introduction of Watch Folders in AUTOMATE 2.2, the orchestration can take a completed GrayKey acquisition and process it with AXIOM, iLEAPP and other tools in your workflow such that you can start analysis as soon as processing is complete.
With Magnet AXIOM 4.6, we’ve updated and added a whole slew of new Mac and iOS artifacts for examiners to use in their investigations.
In this blog, we’ll review some of these new artifacts, like the Rebuilt Desktop for macOS and new artifacts to support Google Docs, Drive, Sheets, and Slides for iOS. With the impending release of macOS 11 (Big Sur), we’ve been hard at work validating and testing AXIOM and AXIOM Cyber, and they are ready to go!
Magnet AXIOM 4.6 and Magnet AXIOM Cyber 4.6 are now available — upgrade today in-product or over at the Customer Portal.
Rebuilt Desktop for macOS
Initially launched in AXIOM 3.9 for Windows, the new Rebuilt Desktop for Mac artifact provides examiners with an approximation of what a user’s desktop looked like without the need to virtualize the endpoint.
Many examiners, as part of court prep, are asked to provide exhibits that make the evidence clear to non-technical stakeholders. Instead of spending time virtualizing the environment, AXIOM and AXIOM Cyber will automatically provide a glimpse of what the user’s desktop looked like as part of processing, as you can see below. Make sure to check out Chris Vance’s blog on the Rebuilt Desktop for macOS here.
Google Apps for iOS
Productivity apps such as Google Slides, Sheets, Docs, and Drive have been gaining popularity as an alternative to Microsoft Office for many individuals and organizations alike. In AXIOM 4.6 we support parsing evidence from several Google productivity apps for iOS: Google Docs, Drive, Slides, and Sheets. This can be extremely useful in cases such as insider threat, where an individual opens Microsoft Word on their iOS device, copies text from a highly sensitive document, then pastes that information into a personal Google Docs file saved on their Google Drive.
Big Sur
The Magnet team has diligently tested both acquisition and analysis against beta versions of Big Sur, set for release from Apple this fall. Throughout testing of macOS 11, AXIOM Cyber’s remote acquisition capabilities have connected time and time again to endpoints running the Big Sur betas without any issue and easily collected evidence for investigations. AXIOM and AXIOM Cyber also parse macOS 11 forensic images with no problem, so come launch day, AXIOM is ready to support your Mac investigations!
Additional Updates
Additional updates to many macOS and iOS artifacts included in the AXIOM and AXIOM Cyber 4.6 release are:
In Magnet AXIOM 4.6, we’re happy to bring a new refined result to the table: “Rebuilt Desktops”, similar to the one we introduced earlier for the Windows operating system, but this time for macOS!
This hit can be found under the REFINED RESULTS artifact category at the top of the navigation pane within the artifact explorer in AXIOM. There will be one record per desktop for each user of the macOS computer. Once a record is selected, it will display information in two sections in the content pane.
First, the details card will display the user account, the path to the wallpaper file, and the display resolution. This card will also contain the standard source, location, and evidence information, which will help users see where the desktop information is recovered from.
A PREVIEW card will also be generated that will show a recreation of the user’s desktop to the best of AXIOM’s ability using the information available from the source files.
In the figure above, you can see that AXIOM will rebuild the desktop with the following pieces:
The desktop wallpaper (if there is one)
The dock
The menu bar
Any files on the desktop
Reminder: this image does not actually exist on the drive! It is a recreation of the user’s desktop built by combining information from several source files across the drive.
The desktop wallpaper is recovered from the “desktoppicture.db” file, which points to where the actual wallpaper lives on the system. If this file is present, the wallpaper will be represented in the preview card.
To rebuild the dock, the “com.apple.dock.plist” file is used to figure out which icons appear in the dock in the persistent applications, recent applications, and persistent other sections. These icons are then recovered from the disk if available and rendered within the graphic. This gives the user a great visual representation of the “Dock Items” artifact found under the OPERATING SYSTEM category.
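For readers who want to explore the source data by hand, a minimal Python sketch of reading the persistent-apps section of com.apple.dock.plist might look like the following. The `dock_items` helper is hypothetical (not AXIOM’s implementation) and assumes the usual tile-data/file-label layout of that plist:

```python
import plistlib

def dock_items(plist_path):
    """List app names from the dock plist's persistent-apps section."""
    with open(plist_path, "rb") as f:
        dock = plistlib.load(f)
    items = []
    for entry in dock.get("persistent-apps", []):
        # Each dock tile stores its display name under tile-data/file-label.
        label = entry.get("tile-data", {}).get("file-label")
        if label:
            items.append(label)
    return items
```

The same approach, pointed at the recent-apps or persistent-others keys, covers the other dock sections mentioned above.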
The menu bar is rebuilt in the same way from the com.apple.systemuiserver.plist file and can visualize the records represented in the Menu Bar Apps artifact.
Any files located on the Desktop will also be represented within the generated graphic; however, the order and position of the files will not be preserved. All files are represented with a generic graphic unless the file is a picture, in which case it is presented as a thumbnail of that image.
We hope that this artifact will give users a better visual representation of how the user configured their desktop, as well as help reveal information about the user’s behavior. It’s one thing to say that a file is in the Desktop folder, but quite another to show that file along with any wallpaper the user may have set. As they say, a picture is worth a thousand words!
Magnet AXIOM 4.6 and Magnet AXIOM Cyber 4.6 are now available—upgrade today in-product or over at the Customer Portal.
With AXIOM and AXIOM Cyber 4.6, we’ve introduced powerful new artifacts to help you get more from your Mac investigations, added new customization options to Portable Case, and included the ability to export your geolocation data.
We’ve also made remote acquisitions up to four times faster with improved performance in AXIOM Cyber.
Find out more about these new features, along with new and updated artifact support, below.
New in AXIOM & AXIOM Cyber: Gather More Evidence from Macs
AXIOM and AXIOM Cyber 4.6 include new artifacts for Macs, including support for macOS Big Sur (version 11.0) and Rebuilt Desktop.
macOS Big Sur (version 11.0)
When Apple releases macOS Big Sur (version 11.0, scheduled to be generally available later this year), AXIOM and AXIOM Cyber will support it from day one. Since most Mac users update their OS within the first few days of availability, it’s important to have a Mac forensic tool that supports the latest versions as they’re made available, ensuring you get the evidence you need.
With the release of AXIOM 3.9, we introduced the Rebuilt Desktop for Windows artifact. Since its release, we’ve had overwhelmingly positive feedback, so we’ve extended support for the Rebuilt Desktop artifact to Macs with the release of 4.6.
With the Rebuilt Desktop artifact, you get a visualization of what a user’s desktop looked like at the time of imaging, including apps, files and folders, and the background image. This information can prove invaluable for understanding a user’s behavior or possible intent.
When you need to acquire data from a target endpoint, you’re always working against the clock. For example, in a malware attack, minutes can be the difference between thousands of customer records being lost or protected, depending on whether the initial point of compromise is discovered fast enough.
With AXIOM Cyber 4.6, we’ve improved the performance of remote acquisitions saving you valuable time. While individual network conditions and environments may yield varying results, based on internal testing, we consistently observed up to four times faster remote collections. For example, when we tested the remote collection of a single 5GB file, it only took six minutes to collect with AXIOM Cyber 4.6 versus 26 minutes using a previous version.
New in AXIOM & AXIOM Cyber: Customize Your Portable Cases
Clear communication of your case findings is critical. With AXIOM and AXIOM Cyber 4.6, you can now tailor your Portable Case to include the most relevant information for your stakeholders with these new customization features:
Choose which case artifacts are included
Blur or block categorized media (only available in AXIOM)
You also have the option to save your preferences as templates to help you quickly customize your other Portable Cases.
New in AXIOM & AXIOM Cyber: Get More from Your Geolocation Data
Now with AXIOM and AXIOM Cyber 4.6, you can export your geolocation data to a KML file, which you can then analyze using Google Earth or another external GIS program of your choice to obtain additional insights.
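Because KML is plain XML, the exported file can also be post-processed in a script. Below is a hedged Python sketch (the `placemark_coords` helper is hypothetical, not an AXIOM feature) that pulls the name and longitude/latitude out of each Placemark using the standard KML 2.2 namespace:

```python
import xml.etree.ElementTree as ET

KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

def placemark_coords(kml_path):
    """Extract (name, lon, lat) from each Placemark/Point in a KML file."""
    tree = ET.parse(kml_path)
    results = []
    for pm in tree.getroot().iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.findtext("kml:name", default="", namespaces=KML_NS)
        coords = pm.findtext(".//kml:coordinates", default="",
                             namespaces=KML_NS).strip()
        if coords:
            # KML coordinates are "lon,lat[,alt]" tuples.
            lon, lat = coords.split(",")[:2]
            results.append((name, float(lon), float(lat)))
    return results
```

A list like this can then be fed into any mapping or clustering workflow alongside Google Earth.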
Check out this video from Tarah Melton to see how to export your geolocation data with AXIOM:
Other New Enhancements in Magnet AXIOM & Magnet AXIOM Cyber
Acquire the contents of a user’s MEGA account when provided with their credentials
Android 11 Quick Imaging support
New Artifacts
Facebook Messenger – Audio and Voice Messages
(iOS, Android)
Google Calendar Reminders (iOS)
Google Docs (iOS)
Google Drive (iOS)
Google Drive Offline Files (iOS)
Google Sheets (iOS)
Google Slides (iOS)
Photos Media Information (macOS/iOS)
PowerLog (macOS)
Rebuilt Desktop (macOS)
Artifact Updates
Application Permissions (iOS 14)
Big Sur (version 11.0) (macOS)
Calendar Events (Windows, macOS)
Duck Duck Go (iOS)
Ecosia (iOS)
EML(X) Files (Windows)
Firefox (Android)
Google Meet (Android)
Google Photos Albums (iOS)
iMaps (iOS 14)
iMessage (iOS 14)
Instagram Posts (Android)
Lyft (iOS)
Mac Mail (macOS)
Outlook (Windows)
PowerLog (iOS)
Signal (iOS)
Slack (Android)
Twitter Users (iOS)
Uber (Android)
WhatsApp Messages (Android)
WiFi Profiles (Android)
Windows Mail (Windows)
Get Magnet AXIOM 4.6 and Magnet AXIOM Cyber 4.6 Today!
If you’re already using AXIOM, download AXIOM 4.6 or AXIOM Cyber 4.6 over at the Customer Portal. If you want to try AXIOM 4.6 or AXIOM Cyber 4.6 for yourself, request a free trial today.
Earlier this month, we launched Magnet OUTRIDER 2.0, which includes a whole host of new artifact support as well as faster scans compared to earlier versions of OUTRIDER.
In this blog, we’ll dig into the performance gains customers are seeing as well as review the new artifacts and features included in this release. If you’d like to try Magnet OUTRIDER, request a free 30-day trial license here and for those who currently own OUTRIDER, make sure to update today!
Find Evidence Even Faster
Whether you’re triaging devices in the field while conducting a search warrant or back in the lab, time is of the essence. The word triage derives from the French trier, meaning “to separate out”; its modern sense, dating to the early 18th century, is “the action of sorting items according to quality”. With OUTRIDER our goal is exactly that: quickly sort through digital evidence and find what matters for your investigations. While previous versions of OUTRIDER already performed lightning-fast scans for identified apps and CSAM using Child Rescue Coalition technology, our internal testing shows a 30% improvement in speed while still capturing more data with OUTRIDER 2.0.
My results from testing OUTRIDER versions 1.7 and 2.0 are listed below.
Live Machine Specs: i9 Processor, 32 GB RAM, 2 TB Internal OS Drive
Scans were conducted on a total of 5 TB worth of storage
687,000+ files / folders scanned
63,100+ files analyzed with CRC CSAM Detection
| Source Location | OUTRIDER 1.7 | OUTRIDER 2.0 |
| --- | --- | --- |
| Externally Connected SSD via USB 3 | 71 seconds | 41 seconds |
| Run from Internal OS Drive | 65 seconds | 38 seconds |
New Features
The U.S.-based nonprofit the National Center for Missing & Exploited Children (NCMEC) said it had recorded a 106% increase in CyberTipline reports of suspected child sexual exploitation—rising from 983,734 reports in March 2019 to 2,027,520 in the same month this year.
Now available in OUTRIDER 2.0, and quite possibly my favorite new feature in this release, is the ability to import NCMEC CyberTip reports for OUTRIDER to find matching hits. Loading a NCMEC CyberTip is easy: simply select “Import NCMEC CyberTip” at the bottom of the OUTRIDER 2.0 user interface and then select the file, as seen below.
The NCMEC CyberTip matches can include IP addresses, filename matches, and web browser internet history matches, as you can see in the image below. IP addresses imported from a NCMEC CyberTip report will also be used to alert you if an imported IP address matches the current external IP address of the live system being scanned.
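The IP comparison itself amounts to a set intersection. As a rough illustration only (not OUTRIDER’s actual implementation), the sketch below normalizes addresses before comparing, so equivalent textual forms such as expanded IPv6 addresses still match:

```python
import ipaddress

def cybertip_ip_hits(report_ips, observed_ips):
    """Return the set of CyberTip-reported IPs also seen on the live system.

    Addresses are normalized through the ipaddress module so that
    equivalent textual forms (e.g. expanded vs. compressed IPv6) compare
    equal. Results are returned in canonical string form.
    """
    report = {ipaddress.ip_address(ip.strip()) for ip in report_ips}
    observed = {ipaddress.ip_address(ip.strip()) for ip in observed_ips}
    return {str(ip) for ip in report & observed}
```

In practice the “observed” side would come from the scanned machine’s current external IP and recovered network history.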
Live System Artifact Collection
Also new to OUTRIDER 2.0 is the ability to acquire (very quickly) operating system artifacts from a live target system.
New Artifacts Include
USB Device History
Recently Accessed Files
Mapped Network Drives
Prefetch Files
Extended Drive Info
Firewall Info
Installed Apps
IP Info
Logged on Users
Network Connections
Operating System Info
Running Processes
Scheduled Tasks
User Accounts
WiFi Info
WiFi Saved Passwords
Window Services
New capabilities of OUTRIDER 2.0 also include the ability to capture a screenshot of the target device as well as RAM collection. RAM collected with OUTRIDER can easily be ingested into Magnet AXIOM for further analysis.
Quick Tip: Use the new WiFi Saved Passwords list on other encrypted devices. As people are creatures of habit, I especially appreciate the new WiFi Saved Passwords artifact. Within seconds, an examiner can build a potentially extensive password list from this artifact for use when unlocking other encrypted content during their investigation.
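One simple way to put such recovered keys to work, sketched here in Python purely as an illustration (not an OUTRIDER feature), is to merge them with other candidate passwords into a deduplicated wordlist for a password-cracking tool:

```python
def build_wordlist(*sources):
    """Merge candidate passwords (e.g. recovered Wi-Fi keys) into a
    deduplicated wordlist, preserving first-seen order."""
    seen = {}  # dicts preserve insertion order in Python 3.7+
    for source in sources:
        for pw in source:
            pw = pw.strip()
            if pw and pw not in seen:
                seen[pw] = None
    return list(seen)
```

The resulting list can be exported one entry per line for whatever unlocking tool the lab uses.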
Identify More Apps
We’ve also included additional identified apps within this release of OUTRIDER. New app categories include VPN, Messaging and Games.
VPN Apps: Surfshark, NordVPN, Hotspot Shield, StrongVPN, TunnelBear, VyprVPN, Windscribe, KeepSolid VPN Unlimited, IPVanish VPN, Private Internet Access, CyberGhost VPN
Messaging Apps: Skype, WhatsApp, Facebook Messenger, Viber, Slack, Telegram, LINE, WeChat, Discord, Signal, Pidgin, Riot, Teams, KakaoTalk, Wire, Wickr
Game Apps: Roblox, Minecraft, Fortnite
Between the new app categories, live system artifact collection, NCMEC CyberTip ingestion, and faster scan times, the OUTRIDER 2.0 update is a fantastic improvement to an already critical tool in the investigator’s digital toolbox.
As an additional bonus, Jad has personally added an “Easter Egg” in OUTRIDER 2.0, which can be found in the screenshot previewed above. Good luck! The first 5 participants who find the surprise and email me at trey.amick@magnetforensics.com will receive some Magnet swag!
We’ve just added Forensic Fundamentals (AX100) as an online self-paced training course. AX100 takes a deep look at the basics of digital forensics; however, the course is anything but basic. We start our study of digital forensics preparing to do an examination and progress through learning how data may be different in various file systems.
Online Self-Paced Training is one of the learning delivery methods at Magnet Forensics that allows students to experience the same training material at their own location and at their own pace. OSP is a great way to still participate in training without having to physically travel to a training destination.
In addition to there being no need for travel and flexible timing, the greatest benefit of taking AX100 in the OSP environment is that students can progress through the material with timing that works best for them. If there are concepts or modules that need additional time to take notes or to understand, the training can be paused and replayed to reinforce concepts. The OSP offering of AX100 provides students with their own learning experience with the content.
New and seasoned examiners alike can benefit from this course. For beginners, the content covered may be all or mostly new. For examiners who have been doing the job for a while, this is a great refresher on the concepts at the very core of digital forensics. The information covered in this course helps to complete an understanding of the fundamentals that are essential for examiners of all levels.
You can sign up for Forensic Fundamentals (AX100) Online Self-Paced at training.magnetforensics.com. We hope to see you in the Magnet Forensics Training classroom soon!
Love it or hate it, the cloud is here to stay. Whether it’s for individual use, a Fortune 500 company, or a government entity, everyone uses the cloud in one way or another at this point. In this series, we will explore the journey organizations work through as they consider migrating to the cloud, diving into not only the benefits, but also the difficulties that need to be addressed as part of the migration.
Above all else, this series will keep you from being this guy…
The use of public Cloud infrastructure has skyrocketed in the past couple of years, with the market leaders being Amazon Web Services, Microsoft Azure, and Google Cloud. As we can see in the market breakdown below, provided by Statista, AWS has the largest share of the public cloud market at 33% as of Q2 of this year.
While organizations are used to working within traditional on-prem, physical data centers, we first want to understand some of the key differences and why many have looked to migrate to the cloud rather than continuing to grow their on-prem solutions. While there are always hesitations about making such a drastic change to your business, many find Cloud Computing worth the trials and tribulations. We’ll hear later in this series from Cloud Security Engineer Rick Whittington as he recounts his transition from an on-prem environment to the cloud.
Types of Cloud Computing Services
When it comes to cloud computing services, users have several factors and options to weigh when it comes to selecting the right service for their business needs. Let’s take a look at the different types of cloud computing services from most configurable to least.
Infrastructure as a Service (IaaS)
The first service, which yields the most flexibility and adaptability for an enterprise solution, is Infrastructure as a Service (IaaS). IaaS is regarded as the most comprehensive of the cloud computing services since it essentially provides a virtualized infrastructure for an organization to build on. With IaaS deployments, businesses manage it all, from the software and operating systems that are installed down to the specific needs of their organization, without interference from the cloud provider. Examples of IaaS would be Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Compute Engine.
Platform as a Service (PaaS)
Platform as a Service (PaaS) is similar to IaaS in that it allows some adaptability in building out the tools the business needs; however, it typically comes pre-configured with an OS installed and the basic framework for operating in the cloud. PaaS is great for decentralized teams who may all need to access the same application development tools, since it’s cloud/browser based. Examples of PaaS include Microsoft Azure and Google App Engine.
Software as a Service (SaaS)
Lastly, we have Software as a Service (SaaS), which provides a developed software solution to meet a particular user need via a subscription. SaaS solutions include all the necessary infrastructure, operating systems, and data without the customer having to configure each before using the software. SaaS solutions are quick to stand up, letting organizations scale very quickly based on their needs. Examples of SaaS solutions would be Microsoft Office 365, Slack, Google Apps, and Dropbox.
Cloud Core Concepts
Let’s now look at some of the core concepts underlying cloud architecture. Choosing the right cloud option ultimately depends on your need for each of these, so understanding them and how they affect your decision is important.
Scalability
Organizations always have to plan accordingly to scale their operations. With traditional, on-prem data centers this can be a time-consuming and expensive endeavor. If an organization wants to grow into a new market, the process of locating and acquiring a suitable physical location for the data center, ordering all the hardware, delivery, initial set-up, and hiring and training personnel to staff the new facility can take many months to execute.
While this has been the standard for business operations in the past, public cloud infrastructure makes scalability much more fluid. It allows not only cost savings from the time saved growing the business, it also keeps organizations nimble in managing business needs. Instead of attending countless meetings to discuss the logistics of scaling an on-prem datacenter to a new location, a cloud-based operations team can grow with the business in a few clicks of the mouse.
Elasticity
Another core concept of cloud computing is Elasticity. Cloud elasticity can easily be related to a rubber band, which shrinks and expands based on current needs; in the same way, the resources used within cloud environments can shrink or grow depending on current business requirements.
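As a toy illustration of the idea (not any specific cloud provider’s API), the proportional rule below grows or shrinks an instance count so that average utilization moves back toward a target, clamped to minimum and maximum bounds, much like a target-tracking scaling policy:

```python
import math

def desired_capacity(current, cpu_utilization, target=0.5, min_n=1, max_n=10):
    """Toy elasticity rule: pick an instance count that would bring average
    CPU utilization back toward the target, clamped to [min_n, max_n]."""
    wanted = math.ceil(current * cpu_utilization / target)
    return max(min_n, min(max_n, wanted))
```

So a fleet of four instances running at 100% CPU against a 50% target would stretch to eight, and the same fleet running nearly idle would snap back to the minimum, just like the rubber band.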
Reliability
Let’s face it, most people hate change, especially big changes that disrupt current workflows. Common idioms such as “the old way is reliable, and I know how to fix it when it breaks” or “better the devil you know than the devil you don’t” seem to be popular when individuals or corporations look to implement new technology.
Many organizations maintain business continuity plans, per the NIST Cyber Security Framework’s Information Protection Processes and Procedures (PR.IP) category (more information here), for if and when a data center or other information security technology goes down. For cross-referencing, the business continuity subcategory is PR.IP-9.
Migrating operations to a cloud environment provides organizations maximum reliability due to redundancies and separate regions for fail-overs. This takes the burden off managing on-prem data centers that may have a short power outage or worse yet, a natural disaster that devastates the region. For some, a data center that goes offline for an extended period can have negative repercussions on that business for many months into the future.
Agility
Being agile is important for any business, regardless of the industry you’re in. When it comes to On-Prem solutions versus Cloud and being agile, Cloud has many advantages for users.
The first advantage Cloud computing has when it comes to being agile is that it reduces the time required to maintain the infrastructure your company depends on to run smoothly. Maintenance of the Cloud infrastructure is handled by the vendors (AWS & Azure) rather than your organization. Instead of working on upkeep of the infrastructure, focus can instead be on how to add value to the organization.
Another advantage of utilizing the Cloud is in trying new things as a company. With On-Prem solutions, many months of planning have to be coordinated with various stakeholders of the organization before anything comes to market; with Cloud, a business can deploy a new solution much faster than it ever could before, giving more opportunities to try innovative ideas. Along the same lines, organizations utilizing the cloud can often implement newer technology much faster and cheaper than if they were working toward a similar integration with their On-Prem solution.
Conclusion
In this series, we’ll explore many aspects of getting started in the cloud, the security of your cloud environment, and lastly how to investigate the cloud when the need arises. We’ll hear later in this series from Cloud Security Engineer Rick Whittington as he recounts his transition from an on-prem environment to the cloud and sheds light on some of the common misconceptions about utilizing cloud infrastructure and how best to secure it.
If your organization has already migrated to AWS or Azure and you find yourself needing to investigate these platforms, make sure to check out AXIOM Cyber, which allows for analysis of S3, EC2, and Azure virtual machines.
We’re excited to be able to offer our brand-new product, currently known as Project Turbo for beta testers in the Magnet Idea Lab.
Triaging endpoints and preparing client reports can be time consuming. Turbo is a cloud-based early case assessment tool that enables quick remote triage of a computer to uncover potential data exfiltration, surface key artifacts for incident response cases, and rapidly identify computers that require further investigation.
Turbo has been built specifically based on feedback we have received from digital forensics professionals working in a consultancy role. We’ve heard how difficult remote early case assessments are to complete because client visits are costly and site visits are a challenge in the current conditions.
A Time Sensitive Solution for the Urgent Need for Triage
Consultants often need to obtain targeted information quickly so clients can determine if further assessment is needed on employee systems. Early Case Assessment should be fast and take significantly less time to review than traditional deep dive forensics and yet must be forensically sound to ensure the rationale to dig deeper is warranted.
We understand the urgency consultants face to provide triage results to clients. The days of having a week or even 24 hours to provide an assessment are long gone. There is a strong need to be able to quickly triage endpoints without requiring an on-site visit. Consultants have asked us for a method to triage, review and share results with clients without having to sort through a full collection.
Magnet Idea Lab is a Space Dedicated to Solving Customer Problems
Magnet Forensics itself started out with a similar mentality. Our Founder (and CTO), Jad Saliba, developed IEF as a way to address the real problems he faced as a law enforcement officer. Thanks to generous initial feedback from the community, he was truly able to turn it into a product that made a big difference in very important investigations.
Now, our initial product offerings have expanded to solve the needs of corporate clients and consultants. The spirit of problem solving underpins everything we do at Magnet. Earlier this year, we launched the Magnet Idea Lab to create innovative products while involving customers directly and help address the problems they’re facing.
Our initial beta testers have shared that Project Turbo has helped with the following use cases:
Employee exits and offboarding
Suspicious computer activity
IT Policy compliance
Rapid review of incident response artifacts
We would love to hear your feedback on this product to ensure we are building something that solves real problems, and to help you serve your clients faster and more efficiently.
Apply now to the Magnet Idea Lab to access a free trial of Project Turbo.
In this series, Rick Whittington will explore the benefits and potential risks of the Cloud for organizations. Rick will incorporate knowledge he’s gained as a reformed Network Engineer with multiple disciplines in Network, Security, Global Networks, Datacenter, Campus Networks, and Cloud Networks. He’ll also incorporate his years of experience in improving enterprise infrastructure, processes, and teams. Rick has brought his cloud experience to organizations such as Capital One, Charles Schwab and his current position as a Sr. Security Engineer for a large data analytics company.
These objections still ring in my ears from many years ago, when, as a Datacenter Architect, I found myself fighting the business over adopting cloud infrastructure. I was arguing with business units about why our (my) datacenter was better equipped to handle the business’s requirements and its growing needs.
Coming from a Cisco background, and having spent many years in security, I couldn’t fathom myself allowing our data to be shipped to “someone else’s computer!”.
A Game-Changing Project Led Me to the Cloud
Then, one day, I needed to work on an architecture that required a segmented environment for a group of developers to build a new business application; it was going to be “game changing”. The System Architect and I sat down in front of floor-to-ceiling whiteboards and began to discuss our game plan.
However, we quickly encountered problems:
Server stock was low across the nation, creating high lead times
Proposed network utilization would have degraded user experience
Upgrades to infrastructure were required
Going home that night and going through the
requirements, I thought about the problems, weighed possible solutions, and
tried to come up with alternatives. It was at this point I decided that I
needed to research the Cloud to solve this problem. But how would I tackle
security?
Today there are many solutions to accommodate
security. They include an endless supply of vendors that will help you migrate
to the Cloud and plenty of cloud environments to choose from.
Based on my journey to the Cloud many years ago, I’ll describe the five things that convinced me that the Cloud was in fact more secure than my datacenter.
1. Availability
Availability can mean a number of things, such as accessibility or redundancy, and its meaning shifts with the context of the conversation. For example, availability can often mean:
99.999% Uptime
Site Redundancy
Application Access
Failover
However, while availability is often paired with logical constructs, we often forget that it can also be tied to purchasing tangible assets. When managing a traditional physical presence, availability takes a different tone: actual hardware must be purchased, network access provisioned, and space allocated, all before an application is ever turned on. This purchasing side of availability is often overlooked, but it can lead to project delays, sub-optimal selections, or partial setups, costing the organization in the long run.
The laptop shortage during the current pandemic illustrates hardware availability. While remote access was trivial to provision, many organizations struggled to find assets to issue to employees transitioning to remote work. This perfect storm created hardware availability problems and cost organizations overall productivity. Furthermore, the increased reliance on datacenter access forced many organizations to expand their Internet connectivity, creating availability issues of its own in the form of deployment lead times and degraded user experience.
How Can the Cloud Help Me with Availability?
Cloud providers consume the same resources they market to their customers. Because of this, providers ensure maximum availability not only for themselves but for their customers as well. Organizations no longer have to maintain physical assets for customer-facing or internal applications. Internet and inter-network connectivity limitations are removed by seemingly enormous amounts of bandwidth, with no lead time to take on additional usage. Storage and servers abound and are easily provisioned with common OSs, with licensing baked into the cost, once again creating availability without concerns about hardware and software delays or shortages. Freed from concerns about physical assets, engineering teams can instead focus on designs that meet overall business objectives. Availability in the Cloud means the organization as a whole can focus on the access and resiliency of its applications.
2. Data Access
Data access consists of physical and logical access. Many data breaches are, unfortunately, a direct result of misconfiguration or improper handling, such as:
Hard drive removal and disposal
Overly permissive firewall rules and user permissions
Malicious users
Organizations spend large sums of money on physical access controls and often substantially more on logical access controls. Properly securing data challenges organizations of all sizes; however, small and medium-sized organizations often have the most difficulty. Organizations with small IT teams may run flat internal networks, with no internal security measures and overly permissive user access, to minimize impact to users. Data leak events cause severe problems for organizations. They lead to loss of confidence from the public, potential fines, legal repercussions, and in worst-case scenarios, complete closure of the business.
How Can the Cloud Help with Data Access?
Physical Access
Physical access to cloud datacenters is heavily restricted and monitored. Even if an attacker were to somehow obtain access to one of the datacenters, cloud providers obscure the data across multiple drives while also encrypting it at rest. From a physical perspective, the data is often more secure through this obfuscation than in a traditional datacenter, where data is stored in dedicated SANs for a particular company.
Along with this, physical access to cloud provider datacenters is governed by strict rules, physical security measures, and potential regulations and legalities. In short: cloud datacenters are like fortresses. Physical access to the data is not impossible, but it is highly improbable for outside parties not employed directly by the cloud provider.
Logical Access
While most issues with the Cloud come from data leakage at a logical layer, this is often due to configuration errors or the assignment of overly permissive identity and access policies. Cloud providers have started to provide more tooling and better security monitoring, and have instituted a “deny by default” approach.
While cloud providers usually approach the customer with the mentality of “we provide the mechanisms, you implement them,” that mentality has changed over the past several years due to large breaches. Options like encryption at rest and access restrictions are now often implemented by default.
Additionally, many cloud providers now offer a security risk review to evaluate your posture and provide baseline security recommendations. Ultimately, securing data in the Cloud comes down to how the configuration is handled, and there are far more options available than in a traditional on-premise infrastructure.
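The “deny by default” approach providers have moved toward can be illustrated with a small sketch. This is not any provider's actual policy engine or schema, just a minimal evaluator showing the precedence rules: an explicit deny always wins, then an explicit allow, and otherwise the request is denied.

```python
# Minimal sketch of deny-by-default access checking, loosely modeled on
# cloud IAM semantics. The policy format here is illustrative only.

def is_allowed(policies, principal, action, resource):
    """Return True only if some policy explicitly allows the request
    and no policy explicitly denies it."""
    allowed = False
    for p in policies:
        if (principal in p["principals"] and action in p["actions"]
                and resource in p["resources"]):
            if p["effect"] == "Deny":
                return False       # an explicit deny always wins
            if p["effect"] == "Allow":
                allowed = True
    return allowed                 # no matching Allow means default deny

policies = [
    {"effect": "Allow", "principals": {"alice"},
     "actions": {"storage:Read"}, "resources": {"bucket/reports"}},
    {"effect": "Deny", "principals": {"alice"},
     "actions": {"storage:Read"}, "resources": {"bucket/secrets"}},
]
```

With these example policies, "alice" can read `bucket/reports`, is explicitly denied on `bucket/secrets`, and any other principal falls through to the default deny.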
How Do I Keep My Data More Secure?
Each of the cloud providers offer multiple
logical ways to access the data you store in the Cloud. In a later discussion,
I will talk about some of the basic options to check for in Azure, AWS, and
GCP, along with how cloud storage differs from on-premise storage.
However, to keep your data more secure, cloud providers offer many options, including:
Private endpoints accessible from within your account
Identity and access management policies to provide role-based access restrictions
Individual policies for “folder” access
Defaults for encryption, logging, and backup of data
Implemented together, these options provide a well-rounded data access and retention capability that rivals traditional storage vendors.
3. Logging and Monitoring
While logging and monitoring is a normal security requirement in any organization, the ease of implementing and using it often is not.
Depending on the age of the infrastructure, engineers often end up shoehorning monitoring into key choke points and critical server infrastructure. Furthermore, sending this data back to a central logging point consumes network resources and can tax the existing infrastructure.
Of course, this only matters if you are monitoring and logging…you know, just in case something nefarious happens.
However, logging and monitoring is more than a mechanism to ensure you have no nefarious actors within the walls, it is also used for capacity planning, troubleshooting, and meeting regulatory and compliance needs. Yet, the infrastructure and tools needed to accomplish this often exceed operational budgets and capacities. Furthermore, who’s going to monitor all this data and filter out the noise?
How Can the Cloud Help with Logging & Monitoring?
Unlike traditional datacenters, cloud logging and monitoring is generally a small configuration checkbox that can be selected at any time. Monitoring is also broken out into network monitoring (think NetFlow), application monitoring, and identity and access monitoring.
This fundamentally makes it easy to implement anytime and anywhere. Logs captured by the different cloud providers are often sent to local storage within the account for review and parsing. With the focus shifted away from implementation, the problem now becomes operational.
How do you ensure the reliability, redundancy, and usage of the data? As noted earlier, cloud providers offer availability and redundancy by default, especially within their cloud storage. But what about using the data to provide actionable intelligence for operational security teams? Enter solutions like AWS GuardDuty and Azure Advanced Threat Protection.
These solutions, when activated, monitor the traffic within your account and provide basic alerts for known attacks and known malicious traffic patterns. With the providers’ use of AI, the operational overhead of processing and review can be further reduced.
Lastly, all major cloud providers offer an alerting infrastructure that can be easily configured for each of the logging domains. All of this comes at a cost comparable to what organizations would often pay for licensing and hardware alone.
What Can I Do for Greater Visibility Into My Cloud Infrastructure?
As discussed above, enabling some of the basic
logging features within each cloud provider provides great visibility into your
cloud infrastructure. In a later article, I will discuss additional details of
each option available to you, and potential methods for implementation and
usage.
However, basic monitoring with services such as AWS GuardDuty and Azure Advanced Threat Protection is a great thing to research for added insight into common security-related attacks, such as Bitcoin-mining traffic.
4. Failing Fast
This concept took a long time for me to really
grasp. Coming from a traditional datacenter, I was taught to treat my
infrastructure as if it was my child.
Any change to the infrastructure required many approvals, loads of testing, and sometimes purchasing new equipment, and once it went in, it was not coming out. This made prototyping new solutions to business problems rather complex.
It was a Waterfall methodology,
and it was inefficient—especially when things didn’t go exactly according to
plan.
What is Failing Fast with Cloud?
While I can’t promise the Cloud solves the bureaucracy problem, I can say that prototyping and R&D are better served within cloud environments. The focus shifts away from hardware purchases, lead times, shipping delays, and dealing with sales teams.
Many solutions are now available within the provider’s marketplace to purchase and use. Trial costs are billed by the hour, with licensing built in, and you are only charged while the solution is running.
The infrastructure where the prototype is
deployed can also be highly segmented into different accounts to prevent
potential conflicts. This provides limitless possibilities and encourages an
engineering team to find the right solution for the business within a segmented
environment.
No longer will the business need to wait on
many of the blockers that are encountered in traditional infrastructures,
allowing faster adoption of new services.
How Can I Fail Fast Securely?
On traditional infrastructure, everything is considered production, even if the data being used is not. In the Cloud, however, I have found that having an R&D account allows for a segmentation I could never have afforded within my traditional infrastructure.
Using a dedicated account, I can use development data to prototype solutions acquired from the marketplace. I can easily create mock topologies using applications similar to what I would have in a production environment, all without impacting customer or business operations.
To limit any potential exposure, I treat the overall environment as ephemeral, and limit access to specific source nodes or VPN. All of this has allowed me to prototype solutions from vendors that I never would have in the past, while providing better security posture to the organizations I have worked for.
5. Perimeter Security and Internal Security
I left this purposely for last, mainly due to
the complexities of this subject, regardless of location of implementation.
Within traditional environments, network security is often strong on the
perimeter, while lacking security internally.
From flat networks to port forwards, bad actors have many avenues to attack an organization. External attacks can often be traced to a port opened so a development server could be reached for testing. Due to their complexity, proper segmentation and zero-trust methodologies are often not applied, leaving networks vulnerable.
In fact, many vendors promise to solve this issue for you, and multiple solutions can be layered to address it. Ultimately, however, a production network that services both production and development workloads carries an inherent risk.
Here’s an example: a financial firm I previously worked for had a change performed for development purposes on a set of load balancers. These load balancers serviced both development and production workloads, in particular e-mail services. The change inadvertently allowed outside users to access the internally hosted Exchange server without being on VPN. Had this been a development server, an attacker could have compromised it and pivoted within the network, attacking multiple nodes.
We have all read data breach stories like this, where an attacker exploits the compromise of one server and pivots to others. At the end of the day this hurts confidence and trust, impacts morale, and can be costly, with potentially devastating effects.
So How Does the Cloud Solve This Challenge?
To understand how this is solved in a cloud environment, we must first understand why the issue exists on traditional networks. The biggest reason is the cost and maintenance of infrastructure.
Why would you pay for a separate development
and production network, and all associated assets? Multiple firewalls, routers,
switches, and server infrastructure are all costly, not including the ISP
circuits and internet access to replicate the connectivity.
Vendors have been introducing more
virtualization into the mix to try and rectify this issue of logical
segmentation, while allowing for the maximum usage of the purchased asset.
Nothing solves the entire problem, yet. But what if all the infrastructure was
abstracted, and the cost came to just compute, storage, and bandwidth
consumption?
This is how the Cloud solves the issue! The main network and server connectivity layer is handled by the IaaS provider, allowing teams to focus on better security practices: segmenting development and production data and workloads. In a properly architected environment, a compromise of an unpatched development server does not impact production, and when development data is used, no customer data is leaked.
How Can I Segment Better in the Cloud?
The answer is highly dependent on which cloud provider you choose. However, my rule of thumb is to create account-based segmentation. By utilizing accounts built for particular purposes, the chance that a misconfiguration within one account compromises another account with different data is minimized.
A development account will inherently be segmented from a production account and its data. For example, let’s say we have Account A and Account B. By separating accounts, privilege escalation attacks within Account A do not impact Account B or lead to a leak in Account B. If Account A is compromised, you could delete Account A entirely, with no impact to Account B.
Furthermore, this enables flexibility for development, while still maintaining strict security in production-level accounts. Again, no physical network can provide this without first incurring the cost of the actual infrastructure. To me, this was the single most impactful quality that made transitioning our datacenter infrastructure to the Cloud an easier justification.
The Last Words…
The five reasons we just covered are what answered that burning question for me: “What makes the Cloud more secure than my datacenter?”
I hope that by describing them in detail, along with how the Cloud offers benefits over a traditional environment, I can help you answer that burning question too.
In the coming weeks and months, I’ll continue to share my cloud security experience with you. Hopefully some of the insight I share will help you explore the benefits for your organization and implement security best practices to minimize potential risks.
We’re proud to announce that Magnet Forensics has become a member of the WePROTECT Global Alliance (WPGA) — a group dedicated to ending online child sexual exploitation and abuse.
Joining a consortium of likeminded members, including 98 governments, 41 global technology companies, and 44 leading civil society organizations, Magnet Forensics will bring its expertise in the development of digital forensics technologies and training curriculum to the fold.
Our Founder & CTO, Jad Saliba, says: “WePROTECT are doing extremely important work in the global fight to eradicate child sexual exploitation online. Combating these heinous crimes against children have been a motivating factor for the team at Magnet Forensics since our founding and we look forward to partnering with WePROTECT Global Alliance members to improve national and global investigations that ultimately lead to a safer world for children.”
The first month of the #MagnetWeeklyCTF has come and gone! For the month of October, challenges were posed each week to engage forensic examiners on an Android mobile device image. It has been a blast to see how everyone approaches each question in their own way and being able to interact with players on social media and our Magnet Forensics Discord Server! Be sure to join our Discord here for more opportunities to ask questions and earn points! For details of how to join the Magnet Weekly CTF and how to earn additional bonus points, such as from writing up blogs or custom artifacts, read here!
As of November 6, with some already completing the first challenge of the new month, we have an impressive top 5!
Place  Username    Score
1      JoDoSa      445
2      Excelsior!  405
3      svch0st     325
3      peterms     325
4      korstiaan   305
JoDoSa will receive a prize pack for winning the first month of the competition, but remember: the #MagnetWeeklyCTF is based on a cumulative score, so there’s still plenty of time to play and capture the number 1 spot! Trey has already begun working on the final grand prize challenge, which you will definitely want a shot at solving! Stay tuned!
Each week in October, a CTF participant was also randomly chosen for a prize. Congrats to the following players for each week who will get some MagSwag! We’ll be reaching out via the email you registered with to confirm your details for shipping!
Week 1: swanticket
Week 2: Hoktar
Week 3: Forensicator
Week 4: hmc6721
Amazing job goes to all of our players and challenge solvers! We hope you are enjoying the Weekly CTF and continue to test your skills through the rest of 2020 as we mix things up with various image types and forensic challenges! Even more surprises await! Now, let’s take a look back week by week to highlight each question and some key results from the month of October.
Week 1
We started off strong with the first challenge written by Jad Saliba, Founder and CTO of Magnet Forensics. Jad’s question was as follows:
What time (in UTC) was the file that maps names to IP’s recently accessed?
A: 03/05/2020 05:50:18
Kicking off the month, we had dozens of correct submissions! It has been awesome to see all the DFIR community engagement since week 1, with tons of great blog posts covering your solves of each challenge. All of the write-ups have given fantastic insight into the multiple approaches to solving the same problem. An excellent example from Week 1 detailed how to solve the question using only the command line!
Week 2
The second week’s Android question was brought to you by Tarah Melton, Forensic Consultant, and read:
What domain was most recently viewed via an app that has picture-in-picture capability?
A: malliesae.com
Week 2 saw examiners take many different approaches to the problem. One approach was to use the Recent Tasks and Snapshot artifacts found in Android devices, as demonstrated in our solve here, as well as in this webinar comparing artifacts between Android and Google Takeout data. The Snapshot artifact was utilized here as well, and we also found numerous other blogs writing up still other methodologies, as exemplified here, which utilized Alexis Brignoni’s tool ALEAPP!
Week 3
Week 3 brought a bit more of a challenge from Jessica Hyde, Director of Forensics, with users only having 3 attempts to gain 40 points! Many were still able to find the flag, and this week resulted in tons of learning for everyone playing!
Week 3 read:
Which exit did the device user pass by that could have been taken for Cargo?
A: E16
The first hint was to review the webinar comparing iOS and Android artifacts, where a method to reveal the answer was highlighted. You also had the option this week to use a hint which read “MVIMG,” but it would cost you 20 points! The answer to this challenge could be found by carving an MP4 file out of an Android moving image, which contained a frame displaying exit E16 for the flag. There were many creative approaches to solving this question, and some even detailed various witty wordplays like S. Cargo (or escargot?) as seen here! Another creative approach to highlight was viewing the moving image on a Google Pixel device itself! Read about it here!
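For those curious how that carving works under the hood, here is a minimal sketch. It relies on the fact that the video stream appended to an MVIMG JPEG begins with an MP4 'ftyp' box preceded by a 4-byte size; real motion photos also carry XMP metadata recording the exact offset, which robust tools use instead of this naive byte search.

```python
# Sketch: carve an embedded MP4 out of a Google "motion photo" (MVIMG) JPEG.
# The video is appended after the JPEG data, so we locate the 'ftyp' tag
# and slice from the 4-byte box size that precedes it.

def carve_embedded_mp4(data):
    """Return the embedded MP4 bytes, or None if no 'ftyp' box is found."""
    idx = data.find(b"ftyp")
    if idx < 4:
        return None            # no embedded video stream found
    return data[idx - 4:]      # include the 4-byte size before 'ftyp'

# Synthetic demo: a fake JPEG payload with a fake MP4 appended.
jpeg = b"\xff\xd8" + b"JPEGDATA" + b"\xff\xd9"
mp4 = b"\x00\x00\x00\x18ftypmp42" + b"VIDEODATA"
carved = carve_embedded_mp4(jpeg + mp4)
```

On a real motion photo, the carved bytes can be written to a `.mp4` file and played directly.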
Week 4
The final challenge for Android October was written by Trey Amick, Manager of Forensic Consultants, and offered a clever Android finale detailing the device owner’s interest in “phishing.” The question stated:
Chester likes to be organized with his busy schedule. Global Unique Identifiers change often, just like his schedule but sometimes Chester enjoys phishing. What was the original GUID for his phishing expedition?
A: 7605cc68-8ef3-4274-b6c2-4a9d26acabf1
The GUID could be found relating to the Evernote application, but take notice that the question was looking for the original GUID, not the current one. This trickery may have been a bit of a curveball, but it didn’t slow everyone down much! One interesting blog written for this week’s challenge can be found here, where the correct answer was only arrived at after exhausting the 3-try limit. We were even able to read about this solve in Spanish as well!
Last but definitely not least, there were four new custom artifacts added to the Artifact Exchange in the Magnet Forensics Resource Center to be shared with the community. These custom artifacts, written by CTF players who were awarded a whopping 50 extra bonus points per artifact selection, can be downloaded along with all the other custom artifacts available to be used in your AXIOM case processing. Here are some details about these new artifacts!
SOLID EXPLORER 2 DB (ANDROID) – Joshua James, joshua@dfirscience.org
Solid Explorer is an Android file management app inspired by the old school file commander applications (http://neatbytes.com/solidexplorer/). This artifact is the local database for Solid Explorer 2 that shows file access and associated times in Unix ms.
MOTION VIDEOS (ANDROID) – Kevin Rode, kevin.rode@mymail.champlain.edu
Motion Videos were Android’s answer to Apple’s Live Photos. They are stored as a JPG file with an embedded MP4. This artifact will carve out the embedded MP4 so that it can be easily viewed.
BASH HISTORY V2 (COMPUTER/MOBILE) – Kevin Pagano, stark4n6@gmail.com
An updated version of Jessica Hyde’s Bash History parser, which now includes Mobile. It parses the “.bash_history” file and lists out the executed commands.
GOOGLE CALENDAR (ANDROID) – Joshua James, joshua@dfirscience.org
Android Google Calendar app SQLite database containing calendar settings including the user account and sync time.
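To give a flavor of what an artifact like the Bash History parser does, here is a simplified sketch (not the actual custom artifact code) that reads `.bash_history` content and pairs each command with the optional `#<epoch>` timestamp line that precedes it when `HISTTIMEFORMAT` was set.

```python
# Simplified sketch of bash history parsing: the file is a plain list of
# executed commands, optionally interleaved with '#<epoch>' timestamp lines.

def parse_bash_history(text):
    """Return a list of (timestamp_or_None, command) tuples in order."""
    commands = []
    timestamp = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#") and line[1:].isdigit():
            timestamp = int(line[1:])   # epoch seconds for the next command
            continue
        commands.append((timestamp, line))
        timestamp = None
    return commands

sample = "#1601903000\nls -la /etc\ncat /etc/hosts\n"
parsed = parse_bash_history(sample)
```

Here `parsed` pairs the first command with its epoch timestamp and leaves the second untimed, mirroring how the artifact lists executed commands.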
With the month of October and the Android image in the rearview mirror, we hope that you enjoyed these challenges! November kicked off a new image focusing on Linux forensics, so be sure to join in on the fun and test your skills! Thank you to all of our players, bloggers, and custom artifact writers! We’ll check back in at the end of November with more challenges and winners from whatever the next month brings!
With Magnet AUTOMATE 2.5 (and AUTOMATE 2.7, coming soon in early 2021), we’re introducing new stats & management dashboards. The new dashboards feature key lab insights and metrics delivered in easy to consume visual dashboards to help you drive success and efficiency in the lab.
Visibility into your digital forensics lab’s stats and metrics enables lab managers to:
Report on the ROI and value of your investment in AUTOMATE by tracking overall lab throughput and efficiency metrics
Assess lab operations to help you make data-backed decisions regarding resource allocation, hardware & software procurement, and strategic hiring
Quickly Assess Lab and Case Status on the Homepage Case Dashboard
The AUTOMATE homepage gets an upgrade in v2.5 with an enhanced Homepage Case Dashboard. After logging in, gain an at-a-glance understanding of lab throughput and infrastructure health so that you can quickly assess which cases and nodes need immediate attention and why.
The Homepage Case Dashboard now features visual modules that report on:
Notifications Module: An instant view of which cases require immediate attention. This table summarizes what cases are currently pending additional details or were unsuccessfully processed and require manual intervention.
Node Status: Provides a detailed breakdown of which nodes are online, offline, or currently in use as well as their available disk space so an examiner knows when they need to procure more drive space for a machine.
Case Status: Shows the current processing status of the lab including how many cases are processing, failed, and pending. The seven-day totals show approximately how many cases successfully and unsuccessfully completed processing, giving the lab manager quick stats that they can report on.
The enhanced Homepage Case Dashboard in AUTOMATE 2.5.
Drive Lab Improvement and Planning with a New Overview Dashboard
Coming soon, an all-new Overview Dashboard provides lab managers with a high-level overview of lab operations and efficiency metrics to drive lab improvement and resourcing decisions. This new dashboard will allow you to assess overall data throughput, workflow usage, cases in progress, and the number of evidence sources processed.
There are six modules featured in this dashboard, with key metrics including:
Workflow Usage: Lab managers can use this information to see how often each workflow is used (or not). Tracking these trends helps identify efficiencies in the lab, as well as what skills may be required if they’re planning to hire new examiners.
Data Throughput: Shows how much data has been processed in AUTOMATE. You’re able to configure this to show data processed over a period of time (daily, weekly, monthly, annually).
Total Number of Evidence Sources Processed: Shows how many evidence sources AUTOMATE has processed. You’re able to configure this to show the total over a period of time (daily, weekly, monthly, annually).
The new Overview Dashboard, coming soon in early 2021.
Integration with Magnet AXIOM 4.6
Additionally, AUTOMATE 2.5 integrates AXIOM 4.6, introducing new artifacts that help you get to your evidence faster. Check out our AXIOM 4.6 blog post to learn more about the new artifacts and features we introduced.
In this series, Rick Whittington will explore the benefits and potential risks of the Cloud for organizations. Rick will incorporate knowledge he’s gained as a reformed Network Engineer with multiple disciplines in Network, Security, Global Networks, Datacenter, Campus Networks, and Cloud Networks. He’ll also incorporate his years of experience in improving enterprise infrastructure, processes, and teams. Rick has brought his cloud experience to organizations such as Capital One, Charles Schwab and his current position as a Sr. Security Engineer for a large data analytics company.
A Google search for cloud storage can return multiple definitions of this seemingly basic term. Further complicating matters is whether the service is an Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS) solution. Let’s define cloud storage as “a server used to store or retrieve remote files,” the differentiator being the level of management of the remote server. Services like AWS S3, Azure Blob, and Dropbox are great examples of blurring the lines of ‘…as a Service’. Below is a breakdown of several solutions and how they relate to ‘as a Service’:
IaaS
Summary: Bare-metal solution where you select the components, such as RAM, CPU, storage, and software, and install and manage them yourself. Significant control over the configuration of the overall deployment.
AWS: Elastic Block Storage (EBS), Elastic File System (EFS)
Azure: Disk Storage
PaaS
Summary: The provider has already selected the hardware and software. Focus is on the development and management of applications rather than infrastructure control.
With the many cloud storage options provided to organizations, it can be challenging to secure these solutions properly. The threats to on-premise storage and cloud storage can differ, but both can end in the same data leakage and exfiltration outcomes, leading to a loss of confidence in the organization. For this article’s purpose, the focus will be on securing AWS S3 and Azure Blob Storage.
What is Object Storage?
Object storage differs from traditional file system storage and block storage in how the data is managed. Data in file systems is stored in a hierarchy, while block storage stores data as blocks within sectors and tracks. Object storage instead manages data as an object containing the data itself, a globally unique identifier, and metadata, and it works with both structured and unstructured data. Because every object has a globally unique identifier, data replication and distribution can happen at the individual-object level across many hardware systems. With the focus on data as objects, object storage gives an organization the benefit of granular controls.
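The description above can be boiled down to a toy model. This is purely illustrative (no real object store works exactly like this), but it shows the essential coupling of data, a globally unique identifier, and metadata, with flat keys instead of a directory hierarchy.

```python
# Toy model of object storage: each object couples data, a globally unique
# identifier, and metadata; there is no directory hierarchy, only keys.
import uuid

class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, data, **metadata):
        oid = str(uuid.uuid4())          # globally unique identifier
        self._objects[oid] = {"data": data, "metadata": metadata}
        return oid

    def get(self, oid):
        return self._objects[oid]["data"]

    def metadata(self, oid):
        return self._objects[oid]["metadata"]

store = ObjectStore()
oid = store.put(b"q3-report", classification="internal", owner="alice")
```

Because metadata travels with each object, per-object controls (classification, ownership, permissions) become natural, which is exactly the granularity the article describes.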
Common Security Issues
The risk associated with data storage varies with its deployment and use case. The common threat is data exfiltration or leakage; the risk of exposure within an on-premise datacenter is generally contained within the organization’s network boundaries, while data stored in the Cloud has no defined network boundary, leaving the inherent risk of being accessible globally. Most, if not all, security events involving cloud storage relate to permission or policy misconfiguration. While misconfiguration is the typical cause of data leakage, organizations should take a comprehensive security approach to manage overall risk, covering:
Encryption
Management and Discovery
Remediation
Monitoring
Encryption
Many cloud providers offer server-side encryption as a recommended setting, with some providers using it as the default configuration. Server-side encryption encrypts data when it is received by the destination, leaving the data encrypted at rest. Although the data is encrypted at rest, misconfigured permissions will still leave the information accessible. Using client-side encryption provides the most significant security benefit, decreasing the risk that a leakage event will yield usable data. Most organizational security focuses on external threat actors; however, organizations should also consider malicious internal users. Client-side encryption adds the benefit of data privacy, both internally and externally, by encrypting the file’s actual contents.
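To make the client-side idea concrete, here is a deliberately tiny sketch of the encrypt-before-upload flow. It uses a one-time pad (a random key XORed with the plaintext) purely to stay dependency-free; real deployments should use a vetted authenticated cipher such as AES-GCM, typically via the provider's client-side encryption SDKs.

```python
# Illustration of client-side encryption: the client encrypts before upload,
# so the provider (and any leaked copy) only ever holds ciphertext.
# One-time-pad XOR is used here only to avoid external dependencies.
import secrets

def encrypt(plaintext):
    key = secrets.token_bytes(len(plaintext))   # random key, kept client-side
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext, key):
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ciphertext, key = encrypt(b"customer records")
# Only `ciphertext` would be uploaded; `key` never leaves the client.
```

If the stored ciphertext leaks through a permissions misconfiguration, it is useless without the key, which is the point of encrypting on the client side.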
Management and Discovery
Centralized management of assets in cloud environments is highly dependent on both the provider’s capabilities and the organization’s deployment architecture. Many organizations leverage multi-account architectures with providers such as AWS, treating each account as a unique environment. For individual accounts, both AWS and Azure provide compliance reporting to assist with managing and discovering misconfigurations. Unfortunately, in a multi-account architecture, centralized management of configurations such as permissions can become problematic. Both vendors and open-source developers have created solutions to ease the burden of managing and discovering misconfigurations across accounts.
The discovery of storage objects is critical to understanding the organization's cloud footprint. While the management of existing infrastructure can be accomplished in many ways, discovering new storage objects is vital to ensure that organizational policies are applied to mitigate misconfigurations at creation. In addition to discovering new storage objects, organizations must review each object's contents and use appropriate data classification to understand where critical data is stored. With data classification policies applied, files containing sensitive data can be further protected with additional permissions, encryption, or tokenization.
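A minimal sketch of the classification step might look like the following. The rule set and the `classify_object` helper are hypothetical; in practice this job is usually handled by a managed service such as Amazon Macie or Microsoft Purview, which scan object keys and sampled content against sensitivity patterns.

```python
import re

# Hypothetical classification rules: map a sensitivity label to a
# pattern matched against an object's key or a sample of its content.
RULES = {
    "pii":       re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like numbers
    "financial": re.compile(r"(?i)(invoice|payroll|iban)"),
}

def classify_object(key, sample=""):
    """Return the set of sensitivity labels an object matches."""
    text = f"{key}\n{sample}"
    return {label for label, pat in RULES.items() if pat.search(text)}

# An object matching a label can then be routed to stricter controls
# (tighter permissions, encryption, or tokenization).
labels = classify_object("reports/payroll-2020.csv", "id,123-45-6789")
```

Objects that come back with a non-empty label set are the ones worth protecting first.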
Remediation
On-premises storage misconfigurations traditionally expose data internally, creating risk from internal users. On-premises file-sharing solutions can be centrally managed, allowing for rapid remediation of permission misconfigurations. In contrast, as previously stated, a misconfiguration within a cloud environment inherently exposes the data globally. With object storage, individual objects can carry different permissions; without centralized management, manually reconfiguring objects can be time-consuming. To lower the risk of data exposure at scale, organizations need to implement automated remediation. External threat actors use automation to discover misconfigured cloud storage; organizations that choose not to use automation for corrective action face a higher chance of exposure and a loss of public confidence.
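The core of an automated remediation step is simple: detect a public grant and strip it. The sketch below works on an ACL-shaped grant list modeled on S3's `get_bucket_acl` response; the helper names are hypothetical, and a real pipeline would be triggered by a configuration-change event and would follow up with a call such as boto3's `put_bucket_acl` or `put_public_access_block`.

```python
# The "all users" grantee URI used by S3 ACLs for public access.
PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_public(grants):
    """True if any grant exposes the bucket to all users."""
    return any(
        g.get("Grantee", {}).get("URI") == PUBLIC_GRANTEE for g in grants
    )

def remediate(grants):
    """Return the grant list with public grants stripped out.
    In a real pipeline this result would be written back via the
    provider API (e.g., boto3 put_bucket_acl)."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("URI") != PUBLIC_GRANTEE
    ]

grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group", "URI": PUBLIC_GRANTEE},
     "Permission": "READ"},
]
fixed = remediate(grants)   # public READ grant removed, owner kept
```

Wiring a check like this to a change event means the window of exposure shrinks from "whenever someone notices" to seconds.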
Monitoring
Monitoring of cloud storage can help determine who accessed a file object and audit any changes to object permissions; in the event of data exposure, it can establish when permissions were changed and reveal unauthorized access to the file. Access logging is configured at the object level, while configuration auditing operates at the service level. Monitoring, coupled with data classification, helps gauge the severity of a data leakage event, and configuration auditing can help establish potential intent.
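Answering "who accessed this object?" mostly comes down to parsing access logs. The sketch below pulls the requester, source IP, operation, and key out of an abridged line in the style of S3 server access logs; the field layout is simplified (real log lines carry many more fields) and the sample values are illustrative.

```python
import re

# Abridged S3-style access log layout:
# bucket-owner bucket [time] remote-ip requester request-id operation key ...
LOG_RE = re.compile(
    r'^\S+ (?P<bucket>\S+) \[(?P<time>[^\]]+)\] (?P<ip>\S+) '
    r'(?P<requester>\S+) \S+ (?P<operation>\S+) (?P<key>\S+)'
)

def parse_access_line(line):
    """Return the fields of one access-log line as a dict ({} if unparsable)."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else {}

line = ('79a5 mybucket [06/Feb/2019:00:00:38 +0000] 192.0.2.3 '
        'arn:aws:iam::123456789012:user/alice 3E57 REST.GET.OBJECT secret.csv')
rec = parse_access_line(line)
# rec["requester"] tells you who; rec["time"] and rec["ip"] tell you
# when and from where the sensitive object was read.
```

Filtering parsed records for sensitive keys (from the data-classification step) and unexpected requesters is the basis of the exposure triage described above.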
With permissions and policies being the primary method of protecting cloud storage against unauthorized access, it is critical to understand how to implement access control properly. Whether using AWS S3 or Azure Blob Storage, both build on Identity and Access Management (IAM). IAM policies can be applied to an entity (a user or a resource), granting the entity access to individual resources or complete services. When applying access control to cloud storage, permissions may be granted to all objects or to individual objects. A best practice when creating IAM policies for resources is to specify explicit permissions and the exact objects that can be accessed. This may be more difficult with user accounts, as the user may be creating the initial storage object; however, specific permissions can be applied to user accounts to impose limits as well. The purpose of explicitly defining these permissions is to prevent a compromised resource from accessing non-authorized storage objects.
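As a concrete illustration of "explicit permissions and exact objects", here is a sketch that builds a least-privilege identity policy in the AWS policy-document format. The helper name and the bucket/prefix values are hypothetical; the document shape (`Version`, `Statement`, `Effect`, `Action`, `Resource`) follows the standard IAM policy grammar.

```python
import json

def least_privilege_policy(bucket, prefix):
    """Build an identity policy granting only read access to one prefix
    of one bucket, instead of s3:* over all resources."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            # Scope the grant to exactly the objects this entity needs.
            "Resource": [f"arn:aws:s3:::{bucket}/{prefix}*"],
        }],
    }

policy = least_privilege_policy("corp-reports", "finance/")
print(json.dumps(policy, indent=2))
```

A compromised entity holding this policy can read `finance/` objects in one bucket and nothing else, which is exactly the blast-radius limit the paragraph above argues for.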
In conjunction with Identity and Access Management applied to users and resources, storage objects also have additional policies and permissions that can be set. Object-level permissions are generally read and/or write, and in the case of AWS, can be granted to a specific grantee (public, authenticated user, or account). By adding a storage bucket policy, more granular permissions can be defined, such as:
Permissions
By specifying permissions for an IAM Role, an administrator can further restrict individual access. If an IAM Role allows full permissions to a storage bucket or object, the bucket's restrictions will supersede the role's permissions.
Grant Access to individual IAM Roles
Like permissions, further specifying which roles are allowed access to a storage bucket lowers the risk that an overly provisioned role may access sensitive data.
Conditionals
Conditionals allow additional criteria to be specified before access to the storage objects is granted. Examples of conditionals are source account, resource ID, or IP address.
While generic permissions may look easy to implement, the risk of misconfiguration and exposure is much greater. The best practice is to utilize bucket policies and define access as granularly as possible.
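The three elements above (restricted permissions, an explicit role, and a conditional) can be combined in one bucket policy. The sketch below builds such a document in the AWS policy format; the helper name, role ARN, and CIDR range are illustrative, and a real policy would be attached with a call such as boto3's `put_bucket_policy`.

```python
def bucket_policy_with_ip_condition(bucket, role_arn, cidr):
    """Sketch of a bucket policy combining an explicit principal
    (one IAM role) with a source-IP conditional."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowRoleFromCorpNetwork",
            "Effect": "Allow",
            # Only this role, not the whole account, gets access.
            "Principal": {"AWS": role_arn},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # ...and only from the corporate network range.
            "Condition": {"IpAddress": {"aws:SourceIp": cidr}},
        }],
    }

policy = bucket_policy_with_ip_condition(
    "corp-reports",
    "arn:aws:iam::123456789012:role/reporting-app",
    "203.0.113.0/24",
)
```

Even if the role's credentials leak, requests from outside the allowed CIDR fail the condition, which is the layered protection the paragraph above recommends.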
One last consideration for securing access to cloud storage is the use of private endpoints. By utilizing private storage endpoints, resources within the cloud network environment (such as compute resources) can access storage without traversing the public internet. With a private connection, risks such as man-in-the-middle attacks are reduced since all data communication stays within the local network environment.
When considering cloud storage, minimize overall risk with a layered security approach. A combination of encryption, auditing, management, automated remediation, and granular permissions will minimize overall data exposure. In our next installment of this blog series, we'll dig into cloud virtual machines and instances, so stay tuned!
We know that the ability to support the latest and greatest mobile devices and operating systems is critical in your investigations. This fall saw the release of iOS14 and Android 11, and we've been hard at work updating our support for both so you can be confident you're getting the most evidence from your mobile sources!
We take a look at some of the new mobile features and artifacts included in the latest versions of AXIOM and AXIOM Cyber below.
Once you’ve obtained access to a device, time is of the essence, so quickly collecting the relevant data is critical.
AXIOM’s Quick Imaging feature helps you to collect as much information as possible from a mobile device, as quickly as possible, so that you can start examining the evidence right away. A quick image is a comprehensive logical image that contains both user data and some native application data.
Quick Imaging is supported on the latest versions of iOS and Android. For iOS devices, AXIOM can obtain a quick image from devices running version 5.0 and later. For Android, quick images can be obtained from devices running version 2.1 and later. See this video from Tarah Melton for a quick demo of Quick Imaging for Android:
Artifacts
With AXIOM and AXIOM Cyber 4.7, our full complement of iOS and Android artifacts has been updated to support iOS14 and Android 11.
Included with these are several new artifacts to help you get even more from your mobile images.
Google Apps for iOS
Google productivity apps have been gaining popularity as an alternative to Microsoft Office. AXIOM and AXIOM Cyber support parsing evidence from several Google productivity apps for iOS, including Google Docs, Drive, Slides, and Sheets.
With AXIOM and AXIOM Cyber 4.7, we’ve also added new iOS artifacts for Google Photos Media and Google Photos Album, helping you collect and analyze even more potential picture evidence!
Android Motion Photos
A Motion Photo is a short video automatically captured before and after taking a still picture, similar to an iOS Live Photo. Users can leverage this feature to select the best still picture frame or view/share the video itself. AXIOM can now recover Motion Photo artifacts so you can easily add them to your case!
New Custom Artifacts Added to Artifact Exchange
In addition to the updates above, we’re pleased to highlight some new custom mobile artifacts added to our Artifact Exchange by you in our community! These three new custom artifacts, written by players in our Magnet Weekly CTF Challenge, include:
Solid Explorer is an Android file management app inspired by the old school file commander applications (http://neatbytes.com/solidexplorer/). This artifact is the local database for Solid Explorer 2 that shows file access and associated times in Unix ms.
An updated version of Jessica Hyde’s Bash History parser, which now includes Mobile. It parses the “.bash_history” file and lists out the executed commands.
Magnet AXIOM 4.7 and Magnet AXIOM Cyber 4.7 are now available—upgrade today in-product or over at the Customer Portal.
With AXIOM and AXIOM Cyber 4.7, we’re helping ensure you can support the latest iOS and Android devices with iOS14 Quick Imaging and updated artifacts for Android 11.
Find out more about these new features, along with new and updated artifact support below.
New in AXIOM & AXIOM Cyber: Support the Latest iOS and Android Devices
We know that the ability to support the latest and greatest mobile devices and operating systems is critical in your investigations. With AXIOM and AXIOM Cyber 4.7, we've updated our support for both iOS and Android so you can be confident you're getting the most evidence from your mobile sources.
With our previous release, we updated our iOS artifacts to support iOS14. AXIOM and AXIOM Cyber 4.7 now include iOS14 Quick Imaging, helping get you as much information as possible from new and upgraded Apple devices, as quickly as possible, so that you can start examining the evidence right away.
Following up on our support for Android 11 Quick Imaging in AXIOM and AXIOM Cyber 4.6, we’ve also updated our full complement of artifacts to support devices running Android 11, including a new artifact for Android Motion Photos.
Other New Enhancements in Magnet AXIOM & Magnet AXIOM Cyber
Improved Export of O365 and Gmail to PST
Enhancements to Mega.NZ acquisition
New Artifacts
Event Logs – User Events (Windows)
Google Photos Album (iOS)
Google Photos Media (iOS)
Motion Photos (Android)
Artifact Updates
AirDrop Outgoing Transfers (iOS)
Aloha Browser (Android)
Device Information (Android)
Discord Messages (Android)
Dropbox (Windows)
EML(X) Files (Windows)
Event Logs (Windows)
Facebook Messages (Android & iOS)
Houseparty (Android & iOS)
iMessage Messages (macOS)
Instagram Messages (Android & iOS)
Internet Explorer Daily History (Windows)
QuickLook Thumbnails (macOS)
Rebuilt Desktop (macOS)
Reddit (Android)
Shellbags (Windows)
Signal Local User (Android)
Signal Group Members (Android)
Signal Messages (Android)
Snapchat Chat Messages (Android)
Snapchat Group Members (Android)
TextMe (Android)
Uber Cached Locations (Android)
Usage History (Android)
Wickr (Android)
Get Magnet AXIOM 4.7 and Magnet AXIOM Cyber 4.7 Today!
If you’re already using AXIOM, download AXIOM 4.7 or AXIOM Cyber 4.7 over at the Customer Portal. If you want to try AXIOM 4.7 or AXIOM Cyber 4.7 for yourself, request a free trial today.