===================================================
   liblognorm 2.0.5.master: tests/test-suite.log
===================================================

# TOTAL: 117
# PASS:  91
# SKIP:  0
# XFAIL: 0
# FAIL:  26
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

FAIL: parser_whitespace.sh
==========================

Using valgrind: no
===============================================================================
[parser_whitespace.sh]: test for whitespace in parser definition
Out:
{ "num": "0x1234 in hex form" }
JSONs weren't equal.
Expected: {"num": "0x1234"}
Actual:   { "num": "0x1234 in hex form" }
FAIL parser_whitespace.sh (exit status: 1)

FAIL: parser_LF.sh
==================

Using valgrind: no
===============================================================================
[parser_LF.sh]: test for LF in parser definition
Out:
{ "num": "0x1234 in hex form" }
JSONs weren't equal.
Expected: {"num": "0x1234"}
Actual:   { "num": "0x1234 in hex form" }
FAIL parser_LF.sh (exit status: 1)

FAIL: field_string.sh
=====================

Using valgrind: no
===============================================================================
[field_string.sh]: test for string syntax
Out:
{ "f": "test b" }
JSONs weren't equal.
Expected: {"f": "test"}
Actual:   { "f": "test b" }
FAIL field_string.sh (exit status: 1)

FAIL: field_string_perm_chars.sh
================================

Using valgrind: no
===============================================================================
[field_string_perm_chars.sh]: test for string type with permitted chars
Out:
{ "f": "abc b" }
JSONs weren't equal.
Expected: {"f": "abc"}
Actual:   { "f": "abc b" }
FAIL field_string_perm_chars.sh (exit status: 1)

FAIL: field_whitespace.sh
=========================

Using valgrind: no
===============================================================================
[field_whitespace.sh]: test for whitespace parser
Out:
{ "b": "word2", "a": "word1 word2" }
JSONs weren't equal.
Expected: { "b": "word2", "a": "word1" }
Actual:   { "b": "word2", "a": "word1 word2" }
FAIL field_whitespace.sh (exit status: 1)

FAIL: rule_last_str_long.sh
===========================

Using valgrind: no
===============================================================================
[rule_last_str_long.sh]: test for multiple formats including string (see also: rule_last_str_short.sh)
Out:
{ "string": "string" }
Out:
{ "string": "string" }
Out:
{ "string": "string after" }
Out:
{ "string": "string after" }
Out:
{ "string": "string", "string": "string middle string" }
JSONs weren't equal.
Expected: {"string": "string" }
Actual:   { "string": "string", "string": "string middle string" }
FAIL rule_last_str_long.sh (exit status: 1)

FAIL: field_whitespace_jsoncnf.sh
=================================

Using valgrind: no
===============================================================================
[field_whitespace_jsoncnf.sh]: test for whitespace parser
Out:
{ "b": "word2", "a": "word1 word2" }
JSONs weren't equal.
Expected: { "b": "word2", "a": "word1" }
Actual:   { "b": "word2", "a": "word1 word2" }
FAIL field_whitespace_jsoncnf.sh (exit status: 1)

FAIL: field_float-fmt_number.sh
===============================

Using valgrind: no
===============================================================================
[field_float-fmt_number.sh]: test for float field
Out:
{ "num": 15.9 in floating pt form }
JSONs weren't equal.
Expected: {"num": 15.9}
Actual:   { "num": 15.9 in floating pt form }
FAIL field_float-fmt_number.sh (exit status: 1)

FAIL: field_hexnumber_v1.sh
===========================

Using valgrind: no
===============================================================================
[field_hexnumber_v1.sh]: test for hexnumber field
Out:
{ "num": "0x1234 in hex form" }
JSONs weren't equal.
Expected: {"num": "0x1234"}
Actual:   { "num": "0x1234 in hex form" }
FAIL field_hexnumber_v1.sh (exit status: 1)

FAIL: field_kernel_timestamp_v1.sh
==================================

Using valgrind: no
===============================================================================
[field_kernel_timestamp_v1.sh]: test for kernel timestamp parser
Out:
{ "timestamp": "[12345.123456] end" }
JSONs weren't equal.
Expected: { "timestamp": "[12345.123456]"}
Actual:   { "timestamp": "[12345.123456] end" }
FAIL field_kernel_timestamp_v1.sh (exit status: 1)

FAIL: field_whitespace_v1.sh
============================

Using valgrind: no
===============================================================================
[field_whitespace_v1.sh]: test for whitespace parser
Out:
{ "b": "word2", "a": "word1 word2" }
JSONs weren't equal.
Expected: { "b": "word2", "a": "word1" }
Actual:   { "b": "word2", "a": "word1 word2" }
FAIL field_whitespace_v1.sh (exit status: 1)

FAIL: field_rest_v1.sh
======================

Using valgrind: no
===============================================================================
[field_rest_v1.sh]: test for rest matches
Out:
{ "label2": "40.30.20.10\/35)", "port": "35 (40.30.20.10\/35)", "ip": "10.20.30.40\/35 (40.30.20.10\/35)", "iface": "Outside:10.20.30.40\/35 (40.30.20.10\/35)" }
JSONs weren't equal.
Expected: { "label2": "40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }
Actual:   { "label2": "40.30.20.10\/35)", "port": "35 (40.30.20.10\/35)", "ip": "10.20.30.40\/35 (40.30.20.10\/35)", "iface": "Outside:10.20.30.40\/35 (40.30.20.10\/35)" }
FAIL field_rest_v1.sh (exit status: 1)

FAIL: field_duration_v1.sh
==========================

Using valgrind: no
===============================================================================
[field_duration_v1.sh]: test for duration syntax
Out:
{ "field": "0:00:42 bytes" }
JSONs weren't equal.
Expected: {"field": "0:00:42"}
Actual:   { "field": "0:00:42 bytes" }
FAIL field_duration_v1.sh (exit status: 1)

FAIL: field_float_v1.sh
=======================

Using valgrind: no
===============================================================================
[field_float_v1.sh]: test for float field
Out:
{ "num": "15.9 in floating pt form" }
JSONs weren't equal.
Expected: {"num": "15.9"}
Actual:   { "num": "15.9 in floating pt form" }
FAIL field_float_v1.sh (exit status: 1)

FAIL: field_tokenized.sh
========================

Using valgrind: no
===============================================================================
[field_tokenized.sh]: test for tokenized field
Out:
{ "more": ", abc , 456 , def ijk789", "arr": [ "123 , abc , 456 , def ijk789" ] }
FAIL field_tokenized.sh (exit status: 1)

FAIL: field_recursive.sh
========================

Using valgrind: no
===============================================================================
[field_recursive.sh]: test for recursive parsing field
Out:
{ "next": { "next": { "next": { "word": "def" }, "word": "456 def" }, "word": "abc 456 def" }, "word": "123 abc 456 def" }
JSONs weren't equal.
Expected: {"word": "123", "next": {"word": "abc", "next": {"word": "456", "next" : {"word": "def"}}}}
Actual:   { "next": { "next": { "next": { "word": "def" }, "word": "456 def" }, "word": "abc 456 def" }, "word": "123 abc 456 def" }
FAIL field_recursive.sh (exit status: 1)

FAIL: field_tokenized_recursive.sh
==================================

Using valgrind: no
===============================================================================
[field_tokenized_recursive.sh]: test for tokenized field with recursive field matching tokens
Out:
{ "originalmsg": "blocked inbound via: 192.168.1.1 from: 1.2.3.4, 5.6.16.0\/12, 8.9.10.11, 12.13.14.15, 16.17.18.0\/8, 19.20.21.24\/3 to 192.168.1.5", "unparsed-data": ", 5.6.16.0\/12, 8.9.10.11, 12.13.14.15, 16.17.18.0\/8, 19.20.21.24\/3 to 192.168.1.5" }
JSONs weren't equal.
Expected: { "addresses": [ {"ip_addr": "1.2.3.4"}, {"subnet_addr": "5.6.16.0", "subnet_mask": "12"}, {"ip_addr": "8.9.10.11"}, {"ip_addr": "12.13.14.15"}, {"subnet_addr": "16.17.18.0", "subnet_mask": "8"}, {"subnet_addr": "19.20.21.24", "subnet_mask": "3"}], "server_ip": "192.168.1.5", "via_ip": "192.168.1.1"}
Actual:   { "originalmsg": "blocked inbound via: 192.168.1.1 from: 1.2.3.4, 5.6.16.0\/12, 8.9.10.11, 12.13.14.15, 16.17.18.0\/8, 19.20.21.24\/3 to 192.168.1.5", "unparsed-data": ", 5.6.16.0\/12, 8.9.10.11, 12.13.14.15, 16.17.18.0\/8, 19.20.21.24\/3 to 192.168.1.5" }
FAIL field_tokenized_recursive.sh (exit status: 1)

FAIL: field_interpret.sh
========================

Using valgrind: no
===============================================================================
[field_interpret.sh]: test for value interpreting field
Out:
{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }
JSONs weren't equal.
Expected: {"session_count": 64}
Actual:   { "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }
FAIL field_interpret.sh (exit status: 1)

FAIL: field_descent.sh
======================

Using valgrind: no
===============================================================================
[field_descent.sh]: test for descent based parsing field
Out:
{ "tm": "2014-12-08T08:53:33.05+05:30", "net": { "ip_addr": "10.20.30.40 at 2014-12-08T08:53:33.05+05:30" }, "device": "gw-1 10.20.30.40 at 2014-12-08T08:53:33.05+05:30" }
JSONs weren't equal.
Expected: {"device": "gw-1", "net": {"ip_addr": "10.20.30.40"}, "tm": "2014-12-08T08:53:33.05+05:30"}
Actual:   { "tm": "2014-12-08T08:53:33.05+05:30", "net": { "ip_addr": "10.20.30.40 at 2014-12-08T08:53:33.05+05:30" }, "device": "gw-1 10.20.30.40 at 2014-12-08T08:53:33.05+05:30" }
FAIL field_descent.sh (exit status: 1)

FAIL: field_descent_with_invalid_ruledef.sh
===========================================

Using valgrind: no
===============================================================================
[field_descent_with_invalid_ruledef.sh]: test for descent based parsing field, with invalid ruledef
liblognorm error: rulebase file tmp.rulebase[1]: invalid field type 'desce'
Out:
{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
liblognorm error: rulebase file tmp.rulebase[1]: invalid field type 'desce'
Out:
{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
liblognorm error: rulebase file tmp.rulebase[1]: invalid field type 'desce'
Out:
{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
Out:
{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
Out:
{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
Out:
{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
Out:
{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
Out:
{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
empty tail-field given
Out:
{ "originalmsg": "A10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
JSONs weren't equal.
Expected: { "net": { "tail": "foo", "ip_addr": "10.20.30.40" } }
Actual:   { "originalmsg": "A10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }
FAIL field_descent_with_invalid_ruledef.sh (exit status: 1)

FAIL: field_suffixed.sh
=======================

Using valgrind: no
===============================================================================
[field_suffixed.sh]: test for field with one of many possible suffixes
Out:
{ "originalmsg": "gc reclaimed 559mb eden [surviver: 95b\/30.2mb]", "unparsed-data": "559mb eden [surviver: 95b\/30.2mb]" }
JSONs weren't equal.
Expected: {"eden_free": {"value": "559", "suffix":"mb"}, "surviver_used": {"value": "95", "suffix": "b"}, "surviver_size": {"value": "30.2", "suffix": "mb"}}
Actual:   { "originalmsg": "gc reclaimed 559mb eden [surviver: 95b\/30.2mb]", "unparsed-data": "559mb eden [surviver: 95b\/30.2mb]" }
FAIL field_suffixed.sh (exit status: 1)

FAIL: field_regex_default_group_parse_and_return.sh
===================================================

Using valgrind: no
===============================================================================
[field_regex_default_group_parse_and_return.sh]: test for type ERE for regex field
Out:
{ "second": "122%:7a%", "first": "foo 122%:7a%" }
FAIL field_regex_default_group_parse_and_return.sh (exit status: 1)

FAIL: field_regex_with_consume_group.sh
=======================================

Using valgrind: no
===============================================================================
[field_regex_with_consume_group.sh]: test for regex field with consume-group
Out:
{ "rest": "ad1234abcd,4567ef12,8901abef", "first": "ad1234abcd,4567ef12,8901abef" }
FAIL field_regex_with_consume_group.sh (exit status: 1)

FAIL: field_regex_with_consume_group_and_return_group.sh
========================================================

Using valgrind: no
===============================================================================
[field_regex_with_consume_group_and_return_group.sh]: test for regex field with consume-group and return-group
++ add_rule 'rule=:%first:regex:[a-z]{2}(([a-f0-9]+),)+:0:2%%rest:rest%'
+++ rulebase_file_name
+++ '[' x == x ']'
+++ echo tmp.rulebase
++ rb_file=tmp.rulebase
++ echo 'rule=:%first:regex:[a-z]{2}(([a-f0-9]+),)+:0:2%%rest:rest%'
++ execute ad1234abcd,4567ef12,8901abef
++ '[' x == xon ']'
++ '[' ad1234abcd,4567ef12,8901abef == file ']'
++ echo ad1234abcd,4567ef12,8901abef
++ ../src/ln_test -oallowRegex -r tmp.rulebase -e json
++ echo Out:
Out:
++ cat test.out
{ "rest": "ad1234abcd,4567ef12,8901abef", "first": "ad1234abcd,4567ef12,8901abef" }
++ '[' x == xon ']'
++ assert_output_contains '"first": "4567ef12"'
++ '[' x == x ']'
++ GREP=grep
++ cat test.out
++ grep -F '"first": "4567ef12"'
FAIL field_regex_with_consume_group_and_return_group.sh (exit status: 1)

FAIL: field_regex_with_negation.sh
==================================

Using valgrind: no
===============================================================================
[field_regex_with_negation.sh]: test for regex field with negation
Out:
{ "more": "abc", "text": "123,abc" }
FAIL field_regex_with_negation.sh (exit status: 1)

FAIL: field_tokenized_with_regex.sh
===================================

Using valgrind: no
===============================================================================
[field_tokenized_with_regex.sh]: test for tokenized field with regex based field
Out:
{ "originalmsg": "123,abc,456,def foo bar", "unparsed-data": "123,abc,456,def foo bar" }
{ "originalmsg": "123,abc,456,def foo bar", "unparsed-data": "123,abc,456,def foo bar" }
{ "originalmsg": "123,abc,456,def foo bar", "unparsed-data": "123,abc,456,def foo bar" }
Using valgrind: no
===============================================================================
[field_tokenized_with_regex.sh]: test for tokenized field with regex based field
Out:
{ "originalmsg": "123,abc,456,def foo bar", "unparsed-data": ",abc,456,def foo bar" }
FAIL field_tokenized_with_regex.sh (exit status: 1)
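Editor's note: every "JSONs weren't equal." verdict above comes from the harness comparing expected against actual output as parsed JSON, not as raw text (key order differs between the two in several passing comparisons). The harness itself is shell; the snippet below is only an illustrative sketch of that kind of order-insensitive comparison, with a hypothetical helper name `json_equal` that is not part of the liblognorm test suite.

```python
import json

def json_equal(expected: str, actual: str) -> bool:
    """Illustrative sketch: compare two JSON documents structurally.

    Parsing with json.loads makes key order irrelevant, which is why
    Expected {"b": ..., "a": ...} can match Actual emitted in another
    order. Caveat: json.loads keeps only the LAST value of a duplicate
    key, so a malformed output like the one from rule_last_str_long.sh
    ({ "string": "string", "string": "string middle string" })
    collapses to a single key before the comparison runs.
    """
    try:
        return json.loads(expected) == json.loads(actual)
    except ValueError:
        # Output that is not valid JSON (e.g. the unquoted
        # `{ "num": 15.9 in floating pt form }` from
        # field_float-fmt_number.sh) can never compare equal.
        return False

# Cases drawn from the failures above:
print(json_equal('{"num": "0x1234"}', '{ "num": "0x1234 in hex form" }'))  # False
print(json_equal('{"num": "0x1234"}', '{ "num": "0x1234" }'))              # True
```

Under this sketch, whitespace and key order are irrelevant; only the parsed values have to match, which matches the pass/fail pattern visible in the log.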