
It was a warm Thursday, sometime in May 2015...

Growing a Beard (or "How I Wrote My First Useful Node Project")
By Marko Bonaći

Stage setting: Camera is positioned above the treetop of one of three tall poplars. It looks down on the terrace of a pub. It’s evening, but there’s still enough light to see that the terrace is sparsely populated.

Camera slowly moves down towards a specific table in the corner…

As the camera moves down, an old, crummy typewriter font appears on the screen, typing with distinct sound. It spells:

May 2015, somewhere in Germany…

The frame shows four adult males seated at the table. They sip their beers slowly, except for one of them. The camera focuses on him as he downs a large German one-liter pint in just two takes. On the table there's a visible difference between the number of empty beer mugs in front of him and in front of the others. After a short silence, the heavy drinker speaks (quickly, like he's afraid that someone's going to interrupt him, with the facial expression of a man in a confession):

“I still use grep to search through logs”.

As the sentence hits the eardrums of his buddies, a loud sound of overwhelming surprise involuntarily leaves their mouths. They notice that it has made every guest turn towards their table and that the terrace has fallen into complete silence. The oldest one amongst them reacts quickly, as if he wants no one to hear what he just heard: he turns towards the rest of the terrace and makes a hand-waving motion, signaling that everything is fine. The sound of small talk and "excellent" German jokes once again permeates the terrace.

He, in fact, knew very well that it wasn't all fine. A burning desire to right this wrong grew somewhere deep within his chest. The camera focuses on this gentleman and starts to move ever closer to his chest. When it hits the chest, {FX start} the camera enters inside, beneath the ribs. We see his heart pumping wildly. The camera goes even deeper and enters the heart's atrium, where we see buckets of blood leaving to quickly replenish the rest of the body in this moment of great need {FX end}.

The camera frame closes to a single point in the center of the screen.

A couple of weeks later, we see a middle-aged Croatian in his kitchen, whistling some unrecognizable song while making Nescafé Creme and a secret Croatian vitamin drink called Cedevita.

Now the camera shows him sitting at his desk and focuses on his face, "en face".

He begins to tell his story…

“It was a warm Thursday, sometime in May 2015. My first week at Sematext was coming to an end. I still remember, I was doing some local, on-ramping work, nothing remotely critical, when my boss asked me to leave everything aside. He had a new and exciting project for me. He had allegedly found out that even the biggest proponent of centralized log management, Sematext, hides a person who still uses SSH+grep in its ranks.

The task was to design and implement an application that would let Logsene users access their logs from the command line (L-CLI from now on). I mentioned in my Sematext job interview that, besides Apache Spark (which was to be my main responsibility), I’d like to work with Node.js, if the opportunity presented itself. And here it was…”

What is Logsene?
Good thing you asked. Let me give you a bit of context, in case you don't know what Logsene is. Logsene is a web application that's used to find your way through piles of log messages. Our customers send us huge amounts of log messages, which are then collected into one of our Elasticsearch clusters (hereinafter ES). The system (built entirely out of open source components) processes logs in near real time, so after the logs are safely stored and indexed in ES, they are immediately visible in Logsene. Here's what the Logsene UI looks like:

[Figure: the Logsene UI]

See those two large fields in the figure above? One for the search query and the other for the time range? Yes? Well, that was basically what my application needed to provide, only instead of a web UI, users would use a command-line interface.

So what was the problem? Why am I telling you all this, you might be wondering. Well, I was not exactly what you would call a Node expert at the time. I was more of a Node fanboy than anything else. Yes, I had been following developments in the Node.js community almost from the start and I desperately wanted to work with it, but, in spite of all my "Node evangelism" at my previous workplace, I had not been able to push through a single Node project. That meant I had no Node applications in production, which is how you gain the right kind of experience. I had only played with it in my free time.


The Start Was Rough
On top of all that, I had purchased a lovely MacBook Pro just a couple of weeks earlier, and having previously worked only on Windows and Ubuntu, I needed to google to find solutions for even the most trivial stuff, especially the improvements I wanted to make to my development workflow.

Which IDE do I choose for developing in Node? Which text editor do I use for lighter stuff?

I had previously tried Brackets and Atom, and I had a feeling that they were both somehow better suited for front-end stuff. I had also used WebStorm before, so I downloaded a trial version to check out what juicy improvements had been added since I last used it. None, as far as I could tell. But once I started using it, there was no going back. Though Node support could certainly be substantially better (the only Node project template in WebStorm is "Express app").

Those were all sweet worries, and I enjoyed every step of it, but then it quickly came time to actually do something useful.

How are CLI applications even developed in Node? I found a couple of articles, and one of them led me to a package called Ronin, which is a small framework for developing CLI applications. It lets you structure your shell commands simply by using a folder hierarchy.

[Figure: Ronin command folder hierarchy]

So, looking at the figure above, we have the following commands:

logsene search

logsene config get

logsene config set

In order for a function to be considered a command, it needs to extend the base object ronin.Command, like this:

[Figure: the search command definition, extending ronin.Command]

The run function is executed when the client invokes the command. In this example, logsene search would trigger the run function you see in the figure above.

Ronin also has the useful notion of middleware, which avoids code duplication by providing functions (placed in the middleware folder) that you "use" in your command definition (the figure above shows that the search command uses two middleware functions, session and auth). All middleware functions that you "use" are executed, in the order you list them, before your command's main run function.

Besides the run function, there's also a help function that is simply used to output a command's usage information (e.g. logsene search --help).
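Putting those pieces together, a search command module might look roughly like the sketch below. This is only an illustration based on the description above (the exact middleware and help wiring in Ronin are assumptions here), not the actual Logsene CLI source:

// lib/commands/search.js -- illustrative sketch, not the real Logsene CLI code
var ronin = require('ronin');

var Search = ronin.Command.extend({

  // middleware functions (from the middleware folder) run before `run`, in this order
  use: ['session', 'auth'],

  // executed when the user types `logsene search ...`
  run: function () {
    var terms = process.argv.slice(3); // everything after "logsene search"
    // ... build the query and stream results to stdout ...
    console.log('searching for: ' + terms.join(' '));
  },

  // used to print usage information for `logsene search --help`
  help: function () {
    return 'Usage: logsene search [query] [OPTIONS]';
  }
});

module.exports = Search;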


The First Setback
The first real setback I hit was a feature that required supporting different people who use a shared OS user and a shared Sematext account at the same time, all SSHd into the same box. Many organizations, in order to simplify maintenance, have this kind of "single gateway" setup behind their firewalls.

The problem was that, in order to facilitate one of the most important requirements, namely the ability to pipe search output to other *nix commands (e.g. grep, awk, cut, sed, tail, less, ...), each command needed to be its own, new Node process, i.e. the Node process starts and ends with a single command.

logsene search ERROR | awk '$4 ~/kern/'

[The command searches the last hour (the default) of logs for the term ERROR and pipes the output to awk, which prints only those log entries that originated from the kernel.]

In other words, it wasn't possible to use my own REPL, with my own logsene> prompt, where each user's session would simply be bound to the duration of the Node process, i.e. the REPL. In that case, handling a user session would have been as simple as writing and reading environment variables. The problem is that environment variables written from a Node process (or any other process, for that matter) live only as long as the process that wrote them. When the process exits, the environment variables it created are purged from the environment.

That disqualified environment variables as the means of session persistence across multiple commands.

The whole point of the session was to spare users the trouble of logging in with each new command they execute. I needed a way to establish a session environment that would be uniquely identifiable and bound to a specific user session, across multiple, serially executed Node processes, i.e., commands.

I searched far and wide through all the package search sites: npm (what a terrible search experience), nipster (better), npmsearch, node-modules (not bad), nodejsmodules (which forces https, but doesn't even have a proper SSL certificate), gitreview, cnpmjs and libraries.io, but was not able to find anything related to my problem. I decided that, from then on, I was going to use Google (and occasionally nipster and node-modules) to search for packages.

The other thing this "package searching" experience showed is that not a single search solution out there was designed by a search engine expert (no filters, no facets, no date ranges, no combining tags with queries, not even result sorting, ...). Really terrible. I hope to convince my boss to donate some time to the Node community and put up a proper search engine. Imagine if you could search for a term and get results together with facets, which would let you further filter the results by author, number of dependent modules, recent usage, number of GH stars, date of the last commit, exclude packages without a GH repo, ...

It was a long road, and in the end it again turned out that the simplest solutions are the best ones. I used the fact that every SSH session has a unique SSH_TTY environment variable, even if the same OS user is used by different SSH sessions. So I decided to use the Linux username to identify the user who's physically working at the workstation and SSH_TTY to identify SSHd users, no matter how many of them are SSHd into the box in parallel. This insight was the turning point, as it made it possible to solve the whole problem with a single line of code (using the configstore NPM package):

var conf = new Configstore('logsene' + (process.env.SSH_TTY || '/' + osenv.user()));

Configstore is a package (used by Yeoman, amongst others) that persists information inside the user's home/.config directory, in a configuration file whose relative path and name you provide as a parameter.
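For completeness, here is that line again with its requires and a read/write around it; a tiny sketch with a made-up key name, just to show how values end up in that JSON file:

var Configstore = require('configstore');
var osenv = require('osenv');

// one config file per "session owner": SSHd users get their SSH_TTY,
// the user physically at the box gets her OS username
var conf = new Configstore('logsene' + (process.env.SSH_TTY || '/' + osenv.user()));

conf.set('sessionTimestamp', Date.now());   // written to the JSON file right away
console.log(conf.get('sessionTimestamp'));  // readable from the next command (process)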

To illustrate how that works, imagine there are currently four users working with L-CLI, all using the same OS X user account, named mbo. One of them is physically sitting at the box, while the other three are SSHd into it.

[Figure: four users sharing the same OS account: one at the box, three over SSH]

Looking at the Configstore command above, since the user sitting at the box doesn't have the SSH_TTY environment variable, she is assigned the following config file:

.config/configstore/logsene/logsene-mbo.json

The second user's SSH_TTY environment variable is set to /dev/ttys007, so he's assigned:

.config/configstore/logsene/dev/ttys007.json

Analogous to the second user, the third and fourth users have SSH_TTY set to /dev/ttys008 and /dev/ttys009, respectively.

So this is how it works (a rough sketch of this flow in code follows the list):

  1. the user enters a search command
  2. L-CLI tries to locate the user's configuration file, using the method I just described
  3. if the file is not found, or if more than 30 minutes have passed since the user's last command:
    1. ask the user to log in and use that information to retrieve the API key
    2. ask the user to choose the Logsene app she wants to work with (if the API key has more than one Logsene app) and retrieve the chosen app's APP key
    3. store the username, API key, APP key and session timestamp in the user's conf file
  4. update the session timestamp
  5. display the search results
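In code, that flow could look roughly like the sketch below. It is not the actual implementation: promptLogin and chooseApp are hypothetical helpers standing in for the interactive login and app-selection steps, and the key names are illustrative.

var Configstore = require('configstore');
var osenv = require('osenv');

var SESSION_TTL = 30 * 60 * 1000;  // the 30-minute session window
var conf = new Configstore('logsene' + (process.env.SSH_TTY || '/' + osenv.user()));

function ensureSession(callback) {
  var last = conf.get('sessionTimestamp');

  if (last && Date.now() - last < SESSION_TTL) {
    conf.set('sessionTimestamp', Date.now());  // refresh the sliding window
    return callback(null, conf.all);
  }

  // no config entry, or the session expired: log in and pick a Logsene app
  promptLogin(function (err, user) {             // hypothetical helper
    if (err) return callback(err);
    chooseApp(user.apiKey, function (err, app) { // hypothetical helper
      if (err) return callback(err);
      conf.set('username', user.name);
      conf.set('apiKey', user.apiKey);
      conf.set('appKey', app.key);
      conf.set('sessionTimestamp', Date.now());
      callback(null, conf.all);
    });
  });
}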

OK, that solved the “session problem”.

Streams FTW
Ever since streams became widespread in Node, I've looked at them as something magical. It was unbelievable to see how a couple of pipe() calls can make code terse and transform the way you think about it. Streams are built on top of EventEmitter, which provides pub-sub primitives within an application. Piping is just a convenience mechanism that ties the output of a readable stream (the data source) to the input of a writable stream (the data destination) and handles all the nasty wiring and stream throttling bits for you (the latter alleviates the problem that occurs when the data rate of the source is higher than the processing rate of the destination, so the source stream is asked to stop sending more data until told otherwise by the destination stream).

HTTP requests (used to fetch results from the Logsene API server) are basically streams. stdout (think of it as console.log), where search results should eventually be written, is a stream. So why would I interrupt those nice streaming interfaces with something as ugly as promises in the middle? Moreover, it's a much better experience if log entries start appearing almost immediately after you fire a search than to wait until the whole result set is buffered, just to get a potentially huge output splat all over your terminal, all at once. I yelled "streams all the way down" and got to work.

Since I had never worked with streams for real before, I first needed to re-educate myself a bit. Substack's Stream Handbook is still an excellent stream learning resource. But before starting with that, you should perhaps first go through the NodeSchool Stream Adventure course.

Anyway, the Logsene search API fully supports Elasticsearch client libraries, so to stream-enable the whole search experience I ended up using the elasticsearch-streams package, which provides streaming facilities on top of the official elasticsearch library. You could say that the package emulates streaming by offsetting the retrieved portion of the result set (e.g., hits 1-50 in the first request, 51-100 in the second, ...). Unless you dug a bit deeper, you wouldn't even notice the difference. By utilizing streams, indexing simply becomes a writable stream and searching, of course, a readable stream. Here I also used Dominic Tarr's JSONStream package, which lets you manipulate streams of JSON data in a functional way. E.g. it allows you to map over JSON objects and filter them on the fly, as they pass near you. And that's exactly what I needed, since JSON is not particularly useful for presenting log entries to users. For that, I needed something more table-like, so I transformed JSON to TSV, following my colleague Stefan's advice.
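To make that concrete, here is a stripped-down sketch of the pipeline (not the actual L-CLI code, and written against today's simplified stream API): a readable source of log-entry objects, which in the real CLI comes from elasticsearch-streams, is piped through a Transform that turns every JSON entry into a TSV row, and then into stdout. The field names and sample entries are invented for illustration.

var stream = require('stream');

// stand-in for the readable search stream that elasticsearch-streams provides
var hits = stream.Readable.from([
  { '@timestamp': '2015-06-25T23:06:32Z', severity: 'error', host: 'web-1', message: 'Server not responding' },
  { '@timestamp': '2015-06-25T23:06:40Z', severity: 'info', host: 'web-1', message: 'Reconnected' }
]);

// transform each JSON log entry into one TSV line as it flows by
var jsonToTsv = new stream.Transform({
  objectMode: true,
  transform: function (entry, enc, done) {
    done(null, [entry['@timestamp'], entry.severity, entry.host, entry.message].join('\t') + '\n');
  }
});

// log entries start appearing on the terminal as soon as the first ones arrive
hits.pipe(jsonToTsv).pipe(process.stdout);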

Right about this point, while working with streams, something clicked in my head. I cannot explain exactly what happened, but somehow everything fell into place. I started feeling much more confident with Node. I mean, not that I was ever worried that I wouldn't be able to do the job, but you know those moments when you think of a thing you still need to do (but don't yet know how), then of another, and in a matter of seconds your mental to-do list becomes so large that you cannot help feeling a bit overwhelmed. Experience has taught me that the best thing to do in those situations is to calm down, stabilize your breathing and go back to your actual to-do list. Go over it and add any new things you just thought of. Look at this as a way to improve the product you're working on, not as something to dwell on or be hindered by.

From this point on, it was a breeze. I did more in the following three days than in the previous two weeks. Node had just opened itself up to me, ready to be gulped away. And that's a great feeling!

OK, enough of the human story (that's how my boss refers to my writing; either that or he just says "essay" instead of "blog post"), let's see what this puppy can do...

-t flies

After two weeks of development I only had a basic set of functionality, where you could log in and do a regular relevancy-based ES search.


At this point, my attention was drawn to the fact that normal, relevancy-based search, when used by itself, doesn't play well with log entries.

If you think about it, that actually makes sense. When a user wants to check her logs, she is primarily interested in some specific time range (in fact, if a user just opens the Logsene web application without entering a search query, she gets the last hour of logs). Only after log entries are filtered down to that time range does the user need to be able to enter a query to run against it. You would rarely (if ever) want to search the whole log history, but even then, you're still speaking in time ranges.

So I needed to provide time range filtering functionality. I chose the popular Moment.js library to help me with date-time, time duration and time range parsing.

Let me explain what I mean by date-time, duration and range:

  • date-time is simply a timestamp, e.g. -t 2016-06-24T18:42:36
  • duration is a length of time, represented with units of time: e.g. -t 1y8M8d8h8m8s
  • range is a provisional object that has start and end timestamps

Range is what we are really after. Every -t parameter must eventually yield a date-time range. The following table shows how that calculation is done, depending on the value of the -t parameter that the user has provided.

[Table: how each -t value is translated into a date-time range]

Note that the default range separator is forward slash (standardized by ISO-8601). This can be customized with the --sep parameter.

Duration is basically a date-time modifier, which operates either on a specific date-time (i.e. timestamp) or on the current time. E.g. -t 5 will subtract 5 minutes from the current time and return all entries since the resulting date-time. It yields a range query with only a gte (greater than or equal) boundary.

Also note that when you use a duration in the second position of a range expression, it has to start with either a minus or a plus sign immediately following the forward slash (like in the 3rd, 4th, 6th and 7th examples in the table above). That sign tells L-CLI whether to add the duration to the start or to subtract it from the start.
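As a rough illustration of the duration handling (not the actual L-CLI parser, whose rules are richer, as the table above shows), a -t value like 1h30m could be turned into a gte-only range with Moment.js along these lines; the helper name and the regex are assumptions:

var moment = require('moment');

function durationToRange(durationStr) {
  var units = { y: 'years', M: 'months', d: 'days', h: 'hours', m: 'minutes', s: 'seconds' };
  var start = moment();  // start from "now" and subtract each duration component
  var re = /(\d+)([yMdhms])/g;
  var match;
  var found = false;

  while ((match = re.exec(durationStr)) !== null) {
    start.subtract(parseInt(match[1], 10), units[match[2]]);
    found = true;
  }
  if (!found) {
    // a bare number, e.g. -t 90, defaults to minutes
    start.subtract(parseInt(durationStr, 10) || 60, 'minutes');
  }
  return { gte: start.toISOString() };  // no upper bound: everything since `start`
}

// durationToRange('1h30m') -> { gte: <ISO timestamp 90 minutes ago> }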

Let Me Show You a Few Examples
When a user issues her first L-CLI command (or if more than 30 minutes have passed since her last command), she is asked to login. After a successful login, she is prompted to choose a Logsene application that she wants to work with (if she has more than one, of course).

After that, the command is executed.

Let's say that a user has installed L-CLI and she starts it for the first time, by issuing the simplest possible command, logsene search (without any parameters). L-CLI first asks the user to log in. Upon successful login, she is asked to choose the Logsene application that she wants to query. Now L-CLI has all the prerequisites to start doing its main job, searching logs. Since no parameters were given, L-CLI returns the last hour of log entries (the default maximum number of returned log entries is 200, which can be adjusted with the -s parameter).

logsene search


When a live session exists, L-CLI already has all the required user information, so there's no need to log in:

logsene search


To turn on tracing (used only for troubleshooting, since it messes up the TSV output):

logsene config set --trace [true]


Simple search with tracing turned on:

logsene search


Supply a timestamp to start the search from (returns log entries in the range whose lower bound is the timestamp and whose upper bound is unspecified):

logsene search -t 2015-06-25T23:06:32


Search for documents that contain both terms, response and took.

The default operator is OR, which can be overridden with either -op AND or just --and:

logsene search response took -op AND


Search for documents that contain the phrase "triggered DYLD shared region" and were logged during the last 5 minutes.

logsene search "triggered DYLD shared region" -t 5m


Search for documents that were created between 9h ago and 8h 55m ago.

logsene search -t 9h/+5m


Search for documents that were created between 9h 5m ago and 9h ago.

logsene search -t 9h/-5m


Search for documents that contain either or both phrases and that were created between last Sunday at 06:00 and now (morning is translated to 06:00). Return up to 300 results, instead of the default 200.

logsene search "signature that validated" "signature is valid" -t "last Sunday morning" -s 300


Here's how the previous request looks as it's being shipped to Logsene (after being translated by the elastic.js helper library):

[Figure: the Elasticsearch query generated by elastic.js for the previous command]
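The original screenshot isn't reproduced here, but to give you the flavor, the generated request body would look something along these lines (an approximation, not the exact elastic.js output; the timestamp field name and the range placeholders are assumptions):

var body = {
  query: {
    filtered: {
      query: {
        query_string: {
          query: '"signature that validated" "signature is valid"',
          default_operator: 'OR'
        }
      },
      filter: {
        range: {
          '@timestamp': { gte: '<last Sunday at 06:00>', lte: '<now>' }
        }
      }
    }
  },
  sort: [ { '@timestamp': { order: 'asc' } } ],
  size: 300
};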

Usage: logsene search [query] [OPTIONS]

where OPTIONS may be:

-q <query>      Query string (-q parameter can be omitted)

-op AND         OPTIONAL Overrides default OR operator between multiple terms in a query

-t <interval>   OPTIONAL ISO 8601 datetime or duration or time range

-s <size>       OPTIONAL Number of matches to return. Defaults to 200

-o <offset>     OPTIONAL Number of matches to skip from the beginning. Defaults to 0

--json          OPTIONAL Returns JSON instead of TSV

--sep           OPTIONAL Sets the separator between two datetimes when specifying time range

Examples:

logsene search

returns last 1h of log entries

logsene search -q ERROR

returns last 1h of log entries that contain the term ERROR

logsene search UNDEFINED SEGFAULT

returns last 1h of log entries that have either of the terms

note: default operator is OR

logsene search SEGFAULT Segmentation -op AND

returns last 1h of log entries that have both terms

note: convenience parameter --and has the same effect

logsene search -q "Server not responding"

returns last 1h of log entries that contain the given phrase

logsene search "rare thing" -t 1y8M4d8h30m2s

returns all the log entries that contain the phrase "rare thing" reaching back to

1 year 8 months 4 days 8 hours 30 minutes and 2 seconds

note: when specifying duration, any datetime designator character can be omitted

(shown in the following two examples)

note: months must be specified with uppercase M (distinction from minutes)

note: minutes (m) are the default time unit (a bare number is interpreted as minutes)

logsene search -t 1h30m

returns all the log entries from the last 1.5h

logsene search -t 90

equivalent to the previous example (default time unit is minute)

logsene search -t 2015-06-20T20:48

returns all the log entries that were logged after the provided datetime

note: allowed formats listed at the bottom of this help message

logsene search -t "2015-06-20 20:28"

returns all the log entries that were logged after the provided datetime

note: if a parameter contains spaces, it must be enclosed in quotes

logsene search -t 2015-06-16T22:27:41/2015-06-18T22:27:41

returns all the log entries that were logged between the two provided timestamps

note: date range must either contain forward slash between datetimes,

or a different range separator must be specified (shown in the next example)

logsene search -t "2015-06-16T22:27:41 TO 2015-06-18T22:27:41" --sep " TO "

same as previous command, except it sets the custom string separator that denotes a range

note: default separator is the forward slash (as per ISO-8601)

note: if a parameter contains spaces, it must be enclosed in quotes

logsene search -t "last Friday at 13/last Friday at 13:30"

it is also possible to use “human language” to designate datetime

note: it may be used only in place of datetime. Expressing range is not allowed

(e.g. "last friday between 12 and 14" is not allowed)

note: may yield unpredictable datetime values

logsene search -q ERROR -s 20

returns at most 20 latest log entries (within the last hour) with the term ERROR

logsene search ERROR -s 50 -o 20

returns chronologically sorted hits 21st to 70th (offset=20)

note: default sort order is ascending (for convenience – latest on the bottom)

logsene search --help

outputs this usage information

Allowed datetime formats:

YYYY[-]MM[-]DD[T][HH[:mm[:ss]]]

e.g.

'YYYY-MM-DD'

'YYYY-MM-DD HH:mm'

'YYYY-MM-DDTHH:mm'

'YYYYMMDD'

'YYYY-MM-DD HHmm'

'YYYYMMDD HH:mm'

'YYYYMMDD HHmm'

'YYYYMMDDHHmm'

'YYYYMMDDHH:mm'

'YYYY-MM-DDTHHmm'

'YYYYMMDDTHH:mm'

'YYYYMMDDTHHmm'

'YYYYMMDDTHH:mm'

'YYYY-MM-DD HH:mm:ss'

'YYYY-MM-DD HHmmss'

'YYYY-MM-DDTHH:mm:ss'

'YYYY-MM-DDTHHmmss'

'YYYYMMDDHHmmss'

'YYYYMMDDTHHmmss'

note: to use UTC instead of local time, append Z to datetime

note: all datetime components are optional except date (YYYY, MM and DD)

If not specified, component defaults to its lowest possible value

note: date part may be separated from time by T (ISO-8601), space or nothing at all

Allowed duration format:

[Ny][NM][Nd][Nh][Nm][Ns]

e.g.

1y2M8d22h8m48s

note: uppercase M must be used for months, lowercase m for minutes

note: if only a number is specified, it defaults to minutes

Allowed range formats

range can be expressed in all datetime/duration combinations:

datetime/datetime

datetime/{+|-}duration

duration/{+|-}duration

duration/datetime

where / is the default range separator string and the + or - sign is the duration designator

The following table shows how ranges are calculated, given the different input parameters

[Table: range calculation for each datetime/duration combination]

note: all allowable date/time formats are also permitted when specifying ranges

note: disallowed range separators:

Y, y, M, D, d, H, h, m, S, s, -, +, P, p, T, t

Allowed “human” formats:

10 minutes ago

yesterday at 12:30pm

last night (night becomes 19:00)

last month

last friday at 2pm

3 hours ago

2 weeks ago at 17

wednesday 2 weeks ago

2 months ago

last week saturday morning (morning becomes 06:00)

note: “human” format can be used instead of date-time

note: it is not possible to express duration with “human” format (e.g., “from 2 to 3 this morning”)

note: it is recommended to avoid human format, as it may yield unexpected results

Usage: logsene config set [OPTIONS]

where OPTIONS may be:

--api-key <apiKey>

--app-key <appKey>

--range-separator <sep>

--trace <true|false>

It is not necessary to explicitly set the api-key or the app-key.

Logsene CLI will ask you to log in and choose a Logsene application

if keys are missing from the configuration

Examples:

logsene config set --api-key 11111111-1111-1111-1111-111111111111

sets the api key for the current session

logsene config set --app-key 22222222-2222-2222-2222-222222222222

sets the Logsene application key for the current session

logsene config set --range-separator TO

sets the default separator of two datetimes for time ranges (default is /, as per ISO-8601)

logsene config set --trace [true]

activates tracing for the current session (true can be omitted)

logsene config set --trace false

deactivates tracing for the current session

deactivates tracing for the current session

Usage: logsene config get [OPTION]

Where OPTION may be:

--api-key

--app-key

--app-name

--range-separator (used to separate two datetimes when specifying time range)

--trace

--all (return listing of all params from the current user's session)

Fork, yeah!
So, you see, there's no need to use grep on your logs any more. Once your logs are shipped and securely stored in Logsene (which happens in real time), simply use its little cousin, Logsene CLI. No configuration, no rummaging through servers manually over SSH.

You can try Logsene CLI even if you don’t already have a Sematext account.

Getting a free, 30-day trial account is as simple as it gets! You'll be set up in less than 15 minutes to start playing with Logsene CLI. We won't ask you for your credit card information (it's not needed for a trial account, so why would we?).


The source code can be found on GitHub.

Please ping us back with your impressions, comments, suggestions... anything, really. You can also reach us on Twitter @sematext. And we would be exceptionally glad if you filed an issue or submitted a pull request on GitHub.

Filed under: Logging Tagged: logging, logs, logsene


More Stories By Sematext Blog

Sematext is a globally distributed organization that builds innovative Cloud and On Premises solutions for performance monitoring, alerting and anomaly detection (SPM), log management and analytics (Logsene), and search analytics (SSA). We also provide Search and Big Data consulting services and offer 24/7 production support for Solr and Elasticsearch.
